00:00:00.001 Started by upstream project "autotest-nightly" build number 4312 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3675 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.234 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.238 The recommended git tool is: git 00:00:00.239 using credential 00000000-0000-0000-0000-000000000002 00:00:00.242 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.274 Fetching changes from the remote Git repository 00:00:00.276 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.305 Using shallow fetch with depth 1 00:00:00.305 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.305 > git --version # timeout=10 00:00:00.327 > git --version # 'git version 2.39.2' 00:00:00.327 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.338 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.338 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.803 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.815 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.829 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.829 > git config core.sparsecheckout # timeout=10 00:00:08.840 > git read-tree -mu HEAD # timeout=10 00:00:08.856 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.878 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.879 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.954 [Pipeline] Start of Pipeline 00:00:08.970 [Pipeline] library 00:00:08.972 Loading library shm_lib@master 00:00:08.972 Library shm_lib@master is cached. Copying from home. 00:00:08.993 [Pipeline] node 00:00:09.040 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:09.042 [Pipeline] { 00:00:09.054 [Pipeline] catchError 00:00:09.056 [Pipeline] { 00:00:09.071 [Pipeline] wrap 00:00:09.084 [Pipeline] { 00:00:09.097 [Pipeline] stage 00:00:09.100 [Pipeline] { (Prologue) 00:00:09.286 [Pipeline] sh 00:00:09.568 + logger -p user.info -t JENKINS-CI 00:00:09.587 [Pipeline] echo 00:00:09.588 Node: WFP21 00:00:09.598 [Pipeline] sh 00:00:09.902 [Pipeline] setCustomBuildProperty 00:00:09.912 [Pipeline] echo 00:00:09.913 Cleanup processes 00:00:09.918 [Pipeline] sh 00:00:10.202 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:10.202 1379573 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:10.215 [Pipeline] sh 00:00:10.501 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:10.501 ++ grep -v 'sudo pgrep' 00:00:10.501 ++ awk '{print $1}' 00:00:10.501 + sudo kill -9 00:00:10.501 + true 00:00:10.517 [Pipeline] cleanWs 00:00:10.528 [WS-CLEANUP] Deleting project workspace... 00:00:10.528 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.535 [WS-CLEANUP] done 00:00:10.540 [Pipeline] setCustomBuildProperty 00:00:10.556 [Pipeline] sh 00:00:10.842 + sudo git config --global --replace-all safe.directory '*' 00:00:10.937 [Pipeline] httpRequest 00:00:11.320 [Pipeline] echo 00:00:11.322 Sorcerer 10.211.164.20 is alive 00:00:11.332 [Pipeline] retry 00:00:11.334 [Pipeline] { 00:00:11.348 [Pipeline] httpRequest 00:00:11.352 HttpMethod: GET 00:00:11.353 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.353 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.377 Response Code: HTTP/1.1 200 OK 00:00:11.377 Success: Status code 200 is in the accepted range: 200,404 00:00:11.377 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.105 [Pipeline] } 00:00:16.123 [Pipeline] // retry 00:00:16.131 [Pipeline] sh 00:00:16.415 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.430 [Pipeline] httpRequest 00:00:16.825 [Pipeline] echo 00:00:16.827 Sorcerer 10.211.164.20 is alive 00:00:16.838 [Pipeline] retry 00:00:16.840 [Pipeline] { 00:00:16.856 [Pipeline] httpRequest 00:00:16.861 HttpMethod: GET 00:00:16.862 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:16.862 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:16.881 Response Code: HTTP/1.1 200 OK 00:00:16.881 Success: Status code 200 is in the accepted range: 200,404 00:00:16.882 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:01:24.168 [Pipeline] } 00:01:24.186 [Pipeline] // retry 00:01:24.194 [Pipeline] sh 00:01:24.476 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:01:27.022 [Pipeline] sh 00:01:27.301 + git -C spdk log --oneline -n5 00:01:27.301 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:27.301 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:27.301 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:01:27.301 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:01:27.301 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:01:27.309 [Pipeline] } 00:01:27.318 [Pipeline] // stage 00:01:27.324 [Pipeline] stage 00:01:27.326 [Pipeline] { (Prepare) 00:01:27.336 [Pipeline] writeFile 00:01:27.346 [Pipeline] sh 00:01:27.621 + logger -p user.info -t JENKINS-CI 00:01:27.633 [Pipeline] sh 00:01:27.912 + logger -p user.info -t JENKINS-CI 00:01:27.924 [Pipeline] sh 00:01:28.205 + cat autorun-spdk.conf 00:01:28.205 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.205 SPDK_TEST_NVMF=1 00:01:28.205 SPDK_TEST_NVME_CLI=1 00:01:28.205 SPDK_TEST_NVMF_NICS=mlx5 00:01:28.205 SPDK_RUN_ASAN=1 00:01:28.205 SPDK_RUN_UBSAN=1 00:01:28.205 NET_TYPE=phy 00:01:28.213 RUN_NIGHTLY=1 00:01:28.218 [Pipeline] readFile 00:01:28.242 [Pipeline] withEnv 00:01:28.244 [Pipeline] { 00:01:28.257 [Pipeline] sh 00:01:28.540 + set -ex 00:01:28.540 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:28.540 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:28.540 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.540 ++ SPDK_TEST_NVMF=1 00:01:28.540 ++ SPDK_TEST_NVME_CLI=1 00:01:28.540 ++ SPDK_TEST_NVMF_NICS=mlx5 
00:01:28.540 ++ SPDK_RUN_ASAN=1 00:01:28.540 ++ SPDK_RUN_UBSAN=1 00:01:28.540 ++ NET_TYPE=phy 00:01:28.540 ++ RUN_NIGHTLY=1 00:01:28.540 + case $SPDK_TEST_NVMF_NICS in 00:01:28.540 + DRIVERS=mlx5_ib 00:01:28.540 + [[ -n mlx5_ib ]] 00:01:28.540 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:28.540 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:35.109 rmmod: ERROR: Module irdma is not currently loaded 00:01:35.109 rmmod: ERROR: Module i40iw is not currently loaded 00:01:35.109 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:35.109 + true 00:01:35.109 + for D in $DRIVERS 00:01:35.109 + sudo modprobe mlx5_ib 00:01:35.109 + exit 0 00:01:35.118 [Pipeline] } 00:01:35.133 [Pipeline] // withEnv 00:01:35.138 [Pipeline] } 00:01:35.152 [Pipeline] // stage 00:01:35.162 [Pipeline] catchError 00:01:35.164 [Pipeline] { 00:01:35.178 [Pipeline] timeout 00:01:35.178 Timeout set to expire in 1 hr 0 min 00:01:35.181 [Pipeline] { 00:01:35.196 [Pipeline] stage 00:01:35.199 [Pipeline] { (Tests) 00:01:35.214 [Pipeline] sh 00:01:35.495 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:35.495 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:35.495 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:35.495 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:35.495 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:35.495 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:35.495 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:35.495 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:35.495 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:35.495 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:35.495 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:35.495 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:35.495 + source /etc/os-release 00:01:35.495 ++ NAME='Fedora Linux' 00:01:35.495 ++ VERSION='39 (Cloud Edition)' 00:01:35.495 ++ ID=fedora 00:01:35.495 ++ VERSION_ID=39 00:01:35.495 ++ VERSION_CODENAME= 00:01:35.495 ++ PLATFORM_ID=platform:f39 00:01:35.495 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:35.495 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:35.495 ++ LOGO=fedora-logo-icon 00:01:35.495 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:35.495 ++ HOME_URL=https://fedoraproject.org/ 00:01:35.495 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:35.495 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:35.495 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:35.495 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:35.495 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:35.495 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:35.495 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:35.495 ++ SUPPORT_END=2024-11-12 00:01:35.495 ++ VARIANT='Cloud Edition' 00:01:35.495 ++ VARIANT_ID=cloud 00:01:35.495 + uname -a 00:01:35.495 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:35.495 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:38.779 Hugepages 00:01:38.780 node hugesize free / total 00:01:38.780 node0 1048576kB 0 / 0 00:01:38.780 node0 2048kB 0 / 0 00:01:38.780 node1 1048576kB 0 / 0 00:01:38.780 node1 2048kB 0 / 0 00:01:38.780 00:01:38.780 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:38.780 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:38.780 I/OAT 0000:00:04.1 8086 2021 0 
ioatdma - - 00:01:38.780 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:38.780 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:38.780 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:38.780 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:38.780 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:38.780 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:38.780 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:38.780 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:38.780 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:38.780 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:38.780 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:38.780 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:38.780 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:39.040 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:39.040 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:39.040 + rm -f /tmp/spdk-ld-path 00:01:39.040 + source autorun-spdk.conf 00:01:39.040 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.040 ++ SPDK_TEST_NVMF=1 00:01:39.040 ++ SPDK_TEST_NVME_CLI=1 00:01:39.040 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:39.040 ++ SPDK_RUN_ASAN=1 00:01:39.040 ++ SPDK_RUN_UBSAN=1 00:01:39.040 ++ NET_TYPE=phy 00:01:39.040 ++ RUN_NIGHTLY=1 00:01:39.040 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:39.040 + [[ -n '' ]] 00:01:39.040 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:39.040 + for M in /var/spdk/build-*-manifest.txt 00:01:39.040 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:39.040 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:39.040 + for M in /var/spdk/build-*-manifest.txt 00:01:39.040 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:39.040 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:39.040 + for M in /var/spdk/build-*-manifest.txt 00:01:39.040 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:39.040 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:39.040 ++ uname 00:01:39.040 + [[ Linux == \L\i\n\u\x ]] 00:01:39.040 + sudo dmesg -T 00:01:39.040 + sudo dmesg --clear 00:01:39.299 + dmesg_pid=1381204 00:01:39.299 + [[ Fedora Linux == FreeBSD ]] 00:01:39.299 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.299 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.300 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:39.300 + sudo dmesg -Tw 00:01:39.300 + [[ -x /usr/src/fio-static/fio ]] 00:01:39.300 + export FIO_BIN=/usr/src/fio-static/fio 00:01:39.300 + FIO_BIN=/usr/src/fio-static/fio 00:01:39.300 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:39.300 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:39.300 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:39.300 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.300 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.300 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:39.300 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.300 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.300 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:39.300 10:34:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:39.300 10:34:18 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:39.300 10:34:18 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.300 10:34:18 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:39.300 10:34:18 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:39.300 10:34:18 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:01:39.300 10:34:18 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:39.300 10:34:18 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:39.300 10:34:18 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ NET_TYPE=phy 00:01:39.300 10:34:18 -- nvmf-phy-autotest/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1 00:01:39.300 10:34:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:39.300 10:34:18 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:39.300 10:34:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:39.300 10:34:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:39.300 10:34:18 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:39.300 10:34:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:39.300 10:34:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:39.300 10:34:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:39.300 10:34:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.300 10:34:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.300 10:34:18 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.300 10:34:18 -- paths/export.sh@5 -- $ export PATH 00:01:39.300 10:34:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.300 10:34:18 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:39.300 10:34:18 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:39.300 10:34:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732786458.XXXXXX 00:01:39.300 10:34:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732786458.qO26tp 00:01:39.300 10:34:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:39.300 10:34:18 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:39.300 10:34:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:39.300 10:34:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:39.300 10:34:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:39.300 10:34:18 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:39.300 10:34:18 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:39.300 10:34:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.300 10:34:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:39.300 10:34:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:39.300 10:34:18 -- pm/common@17 -- $ local monitor 00:01:39.300 10:34:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.300 10:34:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.300 10:34:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.300 10:34:18 -- pm/common@21 -- $ date +%s 00:01:39.300 10:34:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.300 10:34:18 -- pm/common@21 -- $ date +%s 00:01:39.300 10:34:18 -- pm/common@25 -- $ sleep 1 00:01:39.300 10:34:18 -- pm/common@21 -- $ date +%s 00:01:39.300 10:34:18 -- pm/common@21 -- $ date +%s 00:01:39.300 10:34:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732786458 00:01:39.300 10:34:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732786458 00:01:39.300 10:34:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732786458 00:01:39.300 10:34:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732786458 00:01:39.559 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732786458_collect-vmstat.pm.log 00:01:39.559 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732786458_collect-cpu-load.pm.log 00:01:39.559 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732786458_collect-cpu-temp.pm.log 00:01:39.559 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732786458_collect-bmc-pm.bmc.pm.log 00:01:40.496 10:34:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:40.496 10:34:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:40.496 10:34:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:40.496 10:34:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:40.496 10:34:19 -- spdk/autobuild.sh@16 -- $ date -u 00:01:40.496 Thu Nov 28 09:34:19 AM UTC 2024 00:01:40.496 10:34:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:40.496 v25.01-pre-276-g35cd3e84d 00:01:40.496 10:34:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:40.496 10:34:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:40.496 10:34:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:40.496 10:34:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:40.496 10:34:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.496 ************************************ 00:01:40.496 START TEST asan 00:01:40.496 ************************************ 00:01:40.496 10:34:19 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:40.496 using asan 00:01:40.496 00:01:40.496 real 0m0.001s 00:01:40.496 user 0m0.001s 00:01:40.496 sys 0m0.000s 00:01:40.496 10:34:19 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:40.496 10:34:19 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:40.496 ************************************ 00:01:40.496 END TEST asan 00:01:40.496 ************************************ 00:01:40.496 10:34:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:40.496 10:34:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:40.496 10:34:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:40.496 10:34:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:40.496 10:34:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.496 ************************************ 00:01:40.496 START TEST ubsan 00:01:40.496 ************************************ 00:01:40.496 10:34:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:40.496 using ubsan 00:01:40.496 00:01:40.496 real 0m0.000s 00:01:40.496 user 0m0.000s 
00:01:40.496 sys 0m0.000s 00:01:40.496 10:34:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:40.496 10:34:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:40.496 ************************************ 00:01:40.496 END TEST ubsan 00:01:40.496 ************************************ 00:01:40.496 10:34:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:40.496 10:34:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:40.496 10:34:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:40.496 10:34:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:40.496 10:34:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:40.496 10:34:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:40.496 10:34:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:40.496 10:34:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:40.496 10:34:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:40.756 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:40.756 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:41.015 Using 'verbs' RDMA provider 00:01:54.170 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:09.054 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:09.054 Creating mk/config.mk...done. 00:02:09.054 Creating mk/cc.flags.mk...done. 00:02:09.054 Type 'make' to build. 00:02:09.054 10:34:46 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:09.054 10:34:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:09.054 10:34:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:09.054 10:34:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.054 ************************************ 00:02:09.054 START TEST make 00:02:09.054 ************************************ 00:02:09.054 10:34:46 make -- common/autotest_common.sh@1129 -- $ make -j112 00:02:09.054 make[1]: Nothing to be done for 'all'. 
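For reference (not part of the original console output): the configure-and-build step recorded above can be reproduced locally with roughly the following commands. This is a minimal sketch assuming a local SPDK checkout at ./spdk; the configure flags are copied verbatim from the autobuild.sh invocation logged at 10:34:19, and the parallelism is the only value changed from the CI run, which uses make -j112 on this builder.

  # Hedged sketch of the build step above, not the CI script itself.
  cd ./spdk                                   # assumed local checkout path
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
  make -j"$(nproc)"                           # CI uses -j112; scale to local cores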
00:02:17.268 The Meson build system 00:02:17.268 Version: 1.5.0 00:02:17.268 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:17.268 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:17.268 Build type: native build 00:02:17.268 Program cat found: YES (/usr/bin/cat) 00:02:17.268 Project name: DPDK 00:02:17.268 Project version: 24.03.0 00:02:17.268 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:17.268 C linker for the host machine: cc ld.bfd 2.40-14 00:02:17.268 Host machine cpu family: x86_64 00:02:17.268 Host machine cpu: x86_64 00:02:17.268 Message: ## Building in Developer Mode ## 00:02:17.268 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:17.268 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:17.268 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:17.268 Program python3 found: YES (/usr/bin/python3) 00:02:17.268 Program cat found: YES (/usr/bin/cat) 00:02:17.268 Compiler for C supports arguments -march=native: YES 00:02:17.268 Checking for size of "void *" : 8 00:02:17.268 Checking for size of "void *" : 8 (cached) 00:02:17.268 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:17.268 Library m found: YES 00:02:17.268 Library numa found: YES 00:02:17.268 Has header "numaif.h" : YES 00:02:17.268 Library fdt found: NO 00:02:17.268 Library execinfo found: NO 00:02:17.268 Has header "execinfo.h" : YES 00:02:17.268 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:17.268 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:17.268 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:17.268 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:17.268 Run-time dependency openssl found: YES 3.1.1 00:02:17.268 Run-time dependency libpcap found: YES 1.10.4 00:02:17.268 Has header "pcap.h" with dependency libpcap: YES 00:02:17.268 Compiler for C supports arguments -Wcast-qual: YES 00:02:17.268 Compiler for C supports arguments -Wdeprecated: YES 00:02:17.268 Compiler for C supports arguments -Wformat: YES 00:02:17.268 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:17.269 Compiler for C supports arguments -Wformat-security: NO 00:02:17.269 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:17.269 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:17.269 Compiler for C supports arguments -Wnested-externs: YES 00:02:17.269 Compiler for C supports arguments -Wold-style-definition: YES 00:02:17.269 Compiler for C supports arguments -Wpointer-arith: YES 00:02:17.269 Compiler for C supports arguments -Wsign-compare: YES 00:02:17.269 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:17.269 Compiler for C supports arguments -Wundef: YES 00:02:17.269 Compiler for C supports arguments -Wwrite-strings: YES 00:02:17.269 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:17.269 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:17.269 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:17.269 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:17.269 Program objdump found: YES (/usr/bin/objdump) 00:02:17.269 Compiler for C supports arguments -mavx512f: YES 00:02:17.269 Checking if "AVX512 checking" compiles: YES 00:02:17.269 Fetching 
value of define "__SSE4_2__" : 1 00:02:17.269 Fetching value of define "__AES__" : 1 00:02:17.269 Fetching value of define "__AVX__" : 1 00:02:17.269 Fetching value of define "__AVX2__" : 1 00:02:17.269 Fetching value of define "__AVX512BW__" : 1 00:02:17.269 Fetching value of define "__AVX512CD__" : 1 00:02:17.269 Fetching value of define "__AVX512DQ__" : 1 00:02:17.269 Fetching value of define "__AVX512F__" : 1 00:02:17.269 Fetching value of define "__AVX512VL__" : 1 00:02:17.269 Fetching value of define "__PCLMUL__" : 1 00:02:17.269 Fetching value of define "__RDRND__" : 1 00:02:17.269 Fetching value of define "__RDSEED__" : 1 00:02:17.269 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:17.269 Fetching value of define "__znver1__" : (undefined) 00:02:17.269 Fetching value of define "__znver2__" : (undefined) 00:02:17.269 Fetching value of define "__znver3__" : (undefined) 00:02:17.269 Fetching value of define "__znver4__" : (undefined) 00:02:17.269 Library asan found: YES 00:02:17.269 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:17.269 Message: lib/log: Defining dependency "log" 00:02:17.269 Message: lib/kvargs: Defining dependency "kvargs" 00:02:17.269 Message: lib/telemetry: Defining dependency "telemetry" 00:02:17.269 Library rt found: YES 00:02:17.269 Checking for function "getentropy" : NO 00:02:17.269 Message: lib/eal: Defining dependency "eal" 00:02:17.269 Message: lib/ring: Defining dependency "ring" 00:02:17.269 Message: lib/rcu: Defining dependency "rcu" 00:02:17.269 Message: lib/mempool: Defining dependency "mempool" 00:02:17.269 Message: lib/mbuf: Defining dependency "mbuf" 00:02:17.269 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:17.269 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.269 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.269 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.269 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:17.269 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:17.269 Compiler for C supports arguments -mpclmul: YES 00:02:17.269 Compiler for C supports arguments -maes: YES 00:02:17.269 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.269 Compiler for C supports arguments -mavx512bw: YES 00:02:17.269 Compiler for C supports arguments -mavx512dq: YES 00:02:17.269 Compiler for C supports arguments -mavx512vl: YES 00:02:17.269 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:17.269 Compiler for C supports arguments -mavx2: YES 00:02:17.269 Compiler for C supports arguments -mavx: YES 00:02:17.269 Message: lib/net: Defining dependency "net" 00:02:17.269 Message: lib/meter: Defining dependency "meter" 00:02:17.269 Message: lib/ethdev: Defining dependency "ethdev" 00:02:17.269 Message: lib/pci: Defining dependency "pci" 00:02:17.269 Message: lib/cmdline: Defining dependency "cmdline" 00:02:17.269 Message: lib/hash: Defining dependency "hash" 00:02:17.269 Message: lib/timer: Defining dependency "timer" 00:02:17.269 Message: lib/compressdev: Defining dependency "compressdev" 00:02:17.269 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:17.269 Message: lib/dmadev: Defining dependency "dmadev" 00:02:17.269 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:17.269 Message: lib/power: Defining dependency "power" 00:02:17.269 Message: lib/reorder: Defining dependency "reorder" 00:02:17.269 Message: lib/security: Defining dependency "security" 00:02:17.269 Has header "linux/userfaultfd.h" : 
YES 00:02:17.269 Has header "linux/vduse.h" : YES 00:02:17.269 Message: lib/vhost: Defining dependency "vhost" 00:02:17.269 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:17.269 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:17.269 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:17.269 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:17.269 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:17.269 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:17.269 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:17.269 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:17.269 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:17.269 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:17.269 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:17.269 Configuring doxy-api-html.conf using configuration 00:02:17.269 Configuring doxy-api-man.conf using configuration 00:02:17.269 Program mandb found: YES (/usr/bin/mandb) 00:02:17.269 Program sphinx-build found: NO 00:02:17.269 Configuring rte_build_config.h using configuration 00:02:17.269 Message: 00:02:17.269 ================= 00:02:17.269 Applications Enabled 00:02:17.269 ================= 00:02:17.269 00:02:17.269 apps: 00:02:17.269 00:02:17.269 00:02:17.269 Message: 00:02:17.269 ================= 00:02:17.269 Libraries Enabled 00:02:17.269 ================= 00:02:17.269 00:02:17.269 libs: 00:02:17.269 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:17.269 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:17.269 cryptodev, dmadev, power, reorder, security, vhost, 00:02:17.269 00:02:17.269 Message: 00:02:17.269 =============== 00:02:17.269 Drivers Enabled 00:02:17.269 =============== 00:02:17.269 00:02:17.269 common: 00:02:17.269 00:02:17.269 bus: 00:02:17.269 pci, vdev, 00:02:17.269 mempool: 00:02:17.269 ring, 00:02:17.269 dma: 00:02:17.269 00:02:17.269 net: 00:02:17.269 00:02:17.269 crypto: 00:02:17.269 00:02:17.269 compress: 00:02:17.269 00:02:17.269 vdpa: 00:02:17.269 00:02:17.269 00:02:17.269 Message: 00:02:17.269 ================= 00:02:17.269 Content Skipped 00:02:17.269 ================= 00:02:17.269 00:02:17.269 apps: 00:02:17.269 dumpcap: explicitly disabled via build config 00:02:17.269 graph: explicitly disabled via build config 00:02:17.269 pdump: explicitly disabled via build config 00:02:17.269 proc-info: explicitly disabled via build config 00:02:17.269 test-acl: explicitly disabled via build config 00:02:17.269 test-bbdev: explicitly disabled via build config 00:02:17.269 test-cmdline: explicitly disabled via build config 00:02:17.269 test-compress-perf: explicitly disabled via build config 00:02:17.269 test-crypto-perf: explicitly disabled via build config 00:02:17.269 test-dma-perf: explicitly disabled via build config 00:02:17.269 test-eventdev: explicitly disabled via build config 00:02:17.269 test-fib: explicitly disabled via build config 00:02:17.269 test-flow-perf: explicitly disabled via build config 00:02:17.269 test-gpudev: explicitly disabled via build config 00:02:17.269 test-mldev: explicitly disabled via build config 00:02:17.269 test-pipeline: explicitly disabled via build config 00:02:17.269 test-pmd: explicitly disabled via build config 00:02:17.269 test-regex: explicitly disabled via build config 00:02:17.269 test-sad: explicitly disabled 
via build config 00:02:17.269 test-security-perf: explicitly disabled via build config 00:02:17.269 00:02:17.269 libs: 00:02:17.269 argparse: explicitly disabled via build config 00:02:17.269 metrics: explicitly disabled via build config 00:02:17.269 acl: explicitly disabled via build config 00:02:17.269 bbdev: explicitly disabled via build config 00:02:17.269 bitratestats: explicitly disabled via build config 00:02:17.269 bpf: explicitly disabled via build config 00:02:17.269 cfgfile: explicitly disabled via build config 00:02:17.269 distributor: explicitly disabled via build config 00:02:17.269 efd: explicitly disabled via build config 00:02:17.269 eventdev: explicitly disabled via build config 00:02:17.269 dispatcher: explicitly disabled via build config 00:02:17.269 gpudev: explicitly disabled via build config 00:02:17.269 gro: explicitly disabled via build config 00:02:17.269 gso: explicitly disabled via build config 00:02:17.269 ip_frag: explicitly disabled via build config 00:02:17.269 jobstats: explicitly disabled via build config 00:02:17.269 latencystats: explicitly disabled via build config 00:02:17.269 lpm: explicitly disabled via build config 00:02:17.269 member: explicitly disabled via build config 00:02:17.269 pcapng: explicitly disabled via build config 00:02:17.269 rawdev: explicitly disabled via build config 00:02:17.269 regexdev: explicitly disabled via build config 00:02:17.269 mldev: explicitly disabled via build config 00:02:17.269 rib: explicitly disabled via build config 00:02:17.269 sched: explicitly disabled via build config 00:02:17.269 stack: explicitly disabled via build config 00:02:17.269 ipsec: explicitly disabled via build config 00:02:17.269 pdcp: explicitly disabled via build config 00:02:17.269 fib: explicitly disabled via build config 00:02:17.269 port: explicitly disabled via build config 00:02:17.269 pdump: explicitly disabled via build config 00:02:17.269 table: explicitly disabled via build config 00:02:17.269 pipeline: explicitly disabled via build config 00:02:17.269 graph: explicitly disabled via build config 00:02:17.269 node: explicitly disabled via build config 00:02:17.269 00:02:17.269 drivers: 00:02:17.269 common/cpt: not in enabled drivers build config 00:02:17.269 common/dpaax: not in enabled drivers build config 00:02:17.269 common/iavf: not in enabled drivers build config 00:02:17.269 common/idpf: not in enabled drivers build config 00:02:17.270 common/ionic: not in enabled drivers build config 00:02:17.270 common/mvep: not in enabled drivers build config 00:02:17.270 common/octeontx: not in enabled drivers build config 00:02:17.270 bus/auxiliary: not in enabled drivers build config 00:02:17.270 bus/cdx: not in enabled drivers build config 00:02:17.270 bus/dpaa: not in enabled drivers build config 00:02:17.270 bus/fslmc: not in enabled drivers build config 00:02:17.270 bus/ifpga: not in enabled drivers build config 00:02:17.270 bus/platform: not in enabled drivers build config 00:02:17.270 bus/uacce: not in enabled drivers build config 00:02:17.270 bus/vmbus: not in enabled drivers build config 00:02:17.270 common/cnxk: not in enabled drivers build config 00:02:17.270 common/mlx5: not in enabled drivers build config 00:02:17.270 common/nfp: not in enabled drivers build config 00:02:17.270 common/nitrox: not in enabled drivers build config 00:02:17.270 common/qat: not in enabled drivers build config 00:02:17.270 common/sfc_efx: not in enabled drivers build config 00:02:17.270 mempool/bucket: not in enabled drivers build config 
00:02:17.270 mempool/cnxk: not in enabled drivers build config 00:02:17.270 mempool/dpaa: not in enabled drivers build config 00:02:17.270 mempool/dpaa2: not in enabled drivers build config 00:02:17.270 mempool/octeontx: not in enabled drivers build config 00:02:17.270 mempool/stack: not in enabled drivers build config 00:02:17.270 dma/cnxk: not in enabled drivers build config 00:02:17.270 dma/dpaa: not in enabled drivers build config 00:02:17.270 dma/dpaa2: not in enabled drivers build config 00:02:17.270 dma/hisilicon: not in enabled drivers build config 00:02:17.270 dma/idxd: not in enabled drivers build config 00:02:17.270 dma/ioat: not in enabled drivers build config 00:02:17.270 dma/skeleton: not in enabled drivers build config 00:02:17.270 net/af_packet: not in enabled drivers build config 00:02:17.270 net/af_xdp: not in enabled drivers build config 00:02:17.270 net/ark: not in enabled drivers build config 00:02:17.270 net/atlantic: not in enabled drivers build config 00:02:17.270 net/avp: not in enabled drivers build config 00:02:17.270 net/axgbe: not in enabled drivers build config 00:02:17.270 net/bnx2x: not in enabled drivers build config 00:02:17.270 net/bnxt: not in enabled drivers build config 00:02:17.270 net/bonding: not in enabled drivers build config 00:02:17.270 net/cnxk: not in enabled drivers build config 00:02:17.270 net/cpfl: not in enabled drivers build config 00:02:17.270 net/cxgbe: not in enabled drivers build config 00:02:17.270 net/dpaa: not in enabled drivers build config 00:02:17.270 net/dpaa2: not in enabled drivers build config 00:02:17.270 net/e1000: not in enabled drivers build config 00:02:17.270 net/ena: not in enabled drivers build config 00:02:17.270 net/enetc: not in enabled drivers build config 00:02:17.270 net/enetfec: not in enabled drivers build config 00:02:17.270 net/enic: not in enabled drivers build config 00:02:17.270 net/failsafe: not in enabled drivers build config 00:02:17.270 net/fm10k: not in enabled drivers build config 00:02:17.270 net/gve: not in enabled drivers build config 00:02:17.270 net/hinic: not in enabled drivers build config 00:02:17.270 net/hns3: not in enabled drivers build config 00:02:17.270 net/i40e: not in enabled drivers build config 00:02:17.270 net/iavf: not in enabled drivers build config 00:02:17.270 net/ice: not in enabled drivers build config 00:02:17.270 net/idpf: not in enabled drivers build config 00:02:17.270 net/igc: not in enabled drivers build config 00:02:17.270 net/ionic: not in enabled drivers build config 00:02:17.270 net/ipn3ke: not in enabled drivers build config 00:02:17.270 net/ixgbe: not in enabled drivers build config 00:02:17.270 net/mana: not in enabled drivers build config 00:02:17.270 net/memif: not in enabled drivers build config 00:02:17.270 net/mlx4: not in enabled drivers build config 00:02:17.270 net/mlx5: not in enabled drivers build config 00:02:17.270 net/mvneta: not in enabled drivers build config 00:02:17.270 net/mvpp2: not in enabled drivers build config 00:02:17.270 net/netvsc: not in enabled drivers build config 00:02:17.270 net/nfb: not in enabled drivers build config 00:02:17.270 net/nfp: not in enabled drivers build config 00:02:17.270 net/ngbe: not in enabled drivers build config 00:02:17.270 net/null: not in enabled drivers build config 00:02:17.270 net/octeontx: not in enabled drivers build config 00:02:17.270 net/octeon_ep: not in enabled drivers build config 00:02:17.270 net/pcap: not in enabled drivers build config 00:02:17.270 net/pfe: not in enabled drivers build 
config 00:02:17.270 net/qede: not in enabled drivers build config 00:02:17.270 net/ring: not in enabled drivers build config 00:02:17.270 net/sfc: not in enabled drivers build config 00:02:17.270 net/softnic: not in enabled drivers build config 00:02:17.270 net/tap: not in enabled drivers build config 00:02:17.270 net/thunderx: not in enabled drivers build config 00:02:17.270 net/txgbe: not in enabled drivers build config 00:02:17.270 net/vdev_netvsc: not in enabled drivers build config 00:02:17.270 net/vhost: not in enabled drivers build config 00:02:17.270 net/virtio: not in enabled drivers build config 00:02:17.270 net/vmxnet3: not in enabled drivers build config 00:02:17.270 raw/*: missing internal dependency, "rawdev" 00:02:17.270 crypto/armv8: not in enabled drivers build config 00:02:17.270 crypto/bcmfs: not in enabled drivers build config 00:02:17.270 crypto/caam_jr: not in enabled drivers build config 00:02:17.270 crypto/ccp: not in enabled drivers build config 00:02:17.270 crypto/cnxk: not in enabled drivers build config 00:02:17.270 crypto/dpaa_sec: not in enabled drivers build config 00:02:17.270 crypto/dpaa2_sec: not in enabled drivers build config 00:02:17.270 crypto/ipsec_mb: not in enabled drivers build config 00:02:17.270 crypto/mlx5: not in enabled drivers build config 00:02:17.270 crypto/mvsam: not in enabled drivers build config 00:02:17.270 crypto/nitrox: not in enabled drivers build config 00:02:17.270 crypto/null: not in enabled drivers build config 00:02:17.270 crypto/octeontx: not in enabled drivers build config 00:02:17.270 crypto/openssl: not in enabled drivers build config 00:02:17.270 crypto/scheduler: not in enabled drivers build config 00:02:17.270 crypto/uadk: not in enabled drivers build config 00:02:17.270 crypto/virtio: not in enabled drivers build config 00:02:17.270 compress/isal: not in enabled drivers build config 00:02:17.270 compress/mlx5: not in enabled drivers build config 00:02:17.270 compress/nitrox: not in enabled drivers build config 00:02:17.270 compress/octeontx: not in enabled drivers build config 00:02:17.270 compress/zlib: not in enabled drivers build config 00:02:17.270 regex/*: missing internal dependency, "regexdev" 00:02:17.270 ml/*: missing internal dependency, "mldev" 00:02:17.270 vdpa/ifc: not in enabled drivers build config 00:02:17.270 vdpa/mlx5: not in enabled drivers build config 00:02:17.270 vdpa/nfp: not in enabled drivers build config 00:02:17.270 vdpa/sfc: not in enabled drivers build config 00:02:17.270 event/*: missing internal dependency, "eventdev" 00:02:17.270 baseband/*: missing internal dependency, "bbdev" 00:02:17.270 gpu/*: missing internal dependency, "gpudev" 00:02:17.270 00:02:17.270 00:02:17.270 Build targets in project: 85 00:02:17.270 00:02:17.270 DPDK 24.03.0 00:02:17.270 00:02:17.270 User defined options 00:02:17.270 buildtype : debug 00:02:17.270 default_library : shared 00:02:17.270 libdir : lib 00:02:17.270 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:17.270 b_sanitize : address 00:02:17.270 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:17.270 c_link_args : 00:02:17.270 cpu_instruction_set: native 00:02:17.270 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:17.270 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:17.270 enable_docs : false 00:02:17.270 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:17.270 enable_kmods : false 00:02:17.270 max_lcores : 128 00:02:17.270 tests : false 00:02:17.270 00:02:17.270 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.270 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:17.270 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:17.270 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:17.270 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:17.270 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:17.270 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:17.270 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:17.270 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:17.270 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:17.270 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.534 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:17.534 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:17.534 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:17.534 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:17.534 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:17.534 [15/268] Linking static target lib/librte_kvargs.a 00:02:17.534 [16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:17.534 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:17.534 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.534 [19/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.534 [20/268] Linking static target lib/librte_log.a 00:02:17.534 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.534 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:17.534 [23/268] Linking static target lib/librte_pci.a 00:02:17.534 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.534 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.534 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.534 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.534 [28/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:17.534 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.534 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.534 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:17.797 [32/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:17.797 [33/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 
00:02:17.797 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:17.797 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:17.797 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:17.797 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:17.797 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:17.797 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:17.797 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:17.797 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:17.797 [42/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:17.797 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:17.797 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:17.797 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:17.797 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:17.797 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:17.797 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:17.797 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:17.797 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:17.797 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:17.797 [52/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:17.797 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:17.797 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:17.797 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:17.797 [56/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:17.797 [57/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.797 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:17.797 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:17.797 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:17.797 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:17.797 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:17.797 [63/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.797 [64/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:17.797 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:18.056 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:18.056 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:18.056 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:18.056 [69/268] Linking static target lib/librte_meter.a 00:02:18.056 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.056 [71/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:18.056 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:18.056 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:18.056 
[74/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:18.056 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:18.056 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:18.056 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:18.056 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:18.056 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:18.056 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:18.056 [81/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:18.056 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:18.056 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:18.056 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.056 [85/268] Linking static target lib/librte_ring.a 00:02:18.056 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.056 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.056 [88/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.056 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:18.056 [90/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.056 [91/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.056 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.056 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:18.056 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:18.056 [95/268] Linking static target lib/librte_telemetry.a 00:02:18.056 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.056 [97/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.057 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.057 [99/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:18.057 [100/268] Linking static target lib/librte_cmdline.a 00:02:18.057 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:18.057 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.057 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:18.057 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.057 [105/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:18.057 [106/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:18.057 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.057 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:18.057 [109/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.057 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:18.057 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.057 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:18.057 [113/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:18.057 [114/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.057 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:18.057 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.057 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:18.057 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:18.057 [119/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:18.057 [120/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.057 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:18.057 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.057 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:18.057 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:18.057 [125/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:18.057 [126/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.057 [127/268] Linking static target lib/librte_timer.a 00:02:18.057 [128/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:18.057 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.057 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.057 [131/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.057 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:18.057 [133/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:18.057 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:18.057 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.057 [136/268] Linking static target lib/librte_net.a 00:02:18.057 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:18.057 [138/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:18.316 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:18.316 [140/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:18.316 [141/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:18.316 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:18.316 [143/268] Linking static target lib/librte_mempool.a 00:02:18.316 [144/268] Linking static target lib/librte_eal.a 00:02:18.316 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:18.316 [146/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.316 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.316 [148/268] Linking static target lib/librte_compressdev.a 00:02:18.316 [149/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:18.316 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:18.316 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:18.316 [152/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.316 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:18.316 [154/268] Linking static target lib/librte_dmadev.a 
00:02:18.316 [155/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:18.316 [156/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:18.316 [157/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.316 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:18.316 [159/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:18.316 [160/268] Linking target lib/librte_log.so.24.1 00:02:18.316 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.316 [162/268] Linking static target lib/librte_rcu.a 00:02:18.316 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:18.316 [164/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:18.316 [165/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:18.316 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:18.316 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.316 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:18.316 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:18.316 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:18.316 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:18.316 [172/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:18.316 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:18.575 [174/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:18.575 [175/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:18.575 [176/268] Linking static target lib/librte_reorder.a 00:02:18.575 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:18.575 [178/268] Linking static target lib/librte_power.a 00:02:18.575 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:18.575 [180/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.575 [181/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:18.575 [182/268] Linking target lib/librte_kvargs.so.24.1 00:02:18.575 [183/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:18.575 [184/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.575 [185/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:18.575 [186/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:18.575 [187/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.575 [188/268] Linking target lib/librte_telemetry.so.24.1 00:02:18.575 [189/268] Linking static target lib/librte_mbuf.a 00:02:18.575 [190/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.575 [191/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.575 [192/268] Linking static target drivers/librte_bus_vdev.a 00:02:18.575 [193/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.575 [194/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:18.575 [195/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.575 [196/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:18.575 [197/268] Linking static target lib/librte_security.a 00:02:18.575 [198/268] Linking static target lib/librte_hash.a 00:02:18.834 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:18.834 [200/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:18.834 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:18.834 [202/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.834 [203/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.834 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:18.834 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.834 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:18.834 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:18.834 [208/268] Linking static target drivers/librte_bus_pci.a 00:02:18.834 [209/268] Linking static target drivers/librte_mempool_ring.a 00:02:18.834 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.093 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.093 [212/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.093 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:19.093 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.093 [215/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.093 [216/268] Linking static target lib/librte_cryptodev.a 00:02:19.093 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.093 [218/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.352 [219/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.352 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.352 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.352 [222/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.612 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.612 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.612 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:19.871 [226/268] Linking static target lib/librte_ethdev.a 00:02:20.808 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:21.066 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.597 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:23.597 [230/268] Linking static target lib/librte_vhost.a 00:02:25.501 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:28.784 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.320 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.320 [234/268] Linking target lib/librte_eal.so.24.1 00:02:31.320 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:31.320 [236/268] Linking target lib/librte_meter.so.24.1 00:02:31.320 [237/268] Linking target lib/librte_timer.so.24.1 00:02:31.320 [238/268] Linking target lib/librte_ring.so.24.1 00:02:31.320 [239/268] Linking target lib/librte_pci.so.24.1 00:02:31.320 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:31.320 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:31.320 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:31.320 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:31.320 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:31.320 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:31.320 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:31.320 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:31.320 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:31.320 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:31.579 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:31.579 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:31.579 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:31.579 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:31.838 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:31.838 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:31.838 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:31.838 [257/268] Linking target lib/librte_net.so.24.1 00:02:31.838 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:31.838 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:31.838 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:31.838 [261/268] Linking target lib/librte_security.so.24.1 00:02:31.838 [262/268] Linking target lib/librte_hash.so.24.1 00:02:31.838 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:32.097 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:32.097 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:32.097 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:32.097 [267/268] Linking target lib/librte_power.so.24.1 00:02:32.097 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:32.097 INFO: autodetecting backend as ninja 00:02:32.097 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:37.373 CC lib/log/log.o 00:02:37.373 CC lib/log/log_flags.o 00:02:37.373 CC lib/log/log_deprecated.o 00:02:37.373 CC lib/ut_mock/mock.o 00:02:37.373 CC lib/ut/ut.o 00:02:37.373 LIB libspdk_log.a 00:02:37.373 LIB libspdk_ut_mock.a 00:02:37.373 LIB libspdk_ut.a 00:02:37.373 SO libspdk_log.so.7.1 00:02:37.373 SO libspdk_ut.so.2.0 00:02:37.373 SO libspdk_ut_mock.so.6.0 00:02:37.373 SYMLINK 
libspdk_log.so 00:02:37.374 SYMLINK libspdk_ut.so 00:02:37.633 SYMLINK libspdk_ut_mock.so 00:02:37.893 CXX lib/trace_parser/trace.o 00:02:37.893 CC lib/dma/dma.o 00:02:37.893 CC lib/ioat/ioat.o 00:02:37.893 CC lib/util/bit_array.o 00:02:37.893 CC lib/util/base64.o 00:02:37.893 CC lib/util/cpuset.o 00:02:37.893 CC lib/util/crc16.o 00:02:37.893 CC lib/util/crc32.o 00:02:37.893 CC lib/util/crc64.o 00:02:37.893 CC lib/util/crc32c.o 00:02:37.893 CC lib/util/crc32_ieee.o 00:02:37.893 CC lib/util/fd_group.o 00:02:37.893 CC lib/util/dif.o 00:02:37.893 CC lib/util/fd.o 00:02:37.893 CC lib/util/file.o 00:02:37.893 CC lib/util/hexlify.o 00:02:37.893 CC lib/util/iov.o 00:02:37.893 CC lib/util/math.o 00:02:37.893 CC lib/util/net.o 00:02:37.893 CC lib/util/pipe.o 00:02:37.893 CC lib/util/strerror_tls.o 00:02:37.893 CC lib/util/string.o 00:02:37.893 CC lib/util/uuid.o 00:02:37.893 CC lib/util/xor.o 00:02:37.893 CC lib/util/zipf.o 00:02:37.893 CC lib/util/md5.o 00:02:37.893 CC lib/vfio_user/host/vfio_user_pci.o 00:02:37.893 CC lib/vfio_user/host/vfio_user.o 00:02:38.152 LIB libspdk_dma.a 00:02:38.152 SO libspdk_dma.so.5.0 00:02:38.152 SYMLINK libspdk_dma.so 00:02:38.152 LIB libspdk_ioat.a 00:02:38.152 SO libspdk_ioat.so.7.0 00:02:38.152 SYMLINK libspdk_ioat.so 00:02:38.152 LIB libspdk_vfio_user.a 00:02:38.152 SO libspdk_vfio_user.so.5.0 00:02:38.411 SYMLINK libspdk_vfio_user.so 00:02:38.411 LIB libspdk_util.a 00:02:38.411 SO libspdk_util.so.10.1 00:02:38.670 LIB libspdk_trace_parser.a 00:02:38.670 SO libspdk_trace_parser.so.6.0 00:02:38.670 SYMLINK libspdk_util.so 00:02:38.670 SYMLINK libspdk_trace_parser.so 00:02:38.930 CC lib/vmd/vmd.o 00:02:38.930 CC lib/vmd/led.o 00:02:38.930 CC lib/idxd/idxd.o 00:02:38.930 CC lib/idxd/idxd_user.o 00:02:38.930 CC lib/idxd/idxd_kernel.o 00:02:38.930 CC lib/env_dpdk/env.o 00:02:38.930 CC lib/env_dpdk/memory.o 00:02:38.930 CC lib/env_dpdk/pci.o 00:02:38.930 CC lib/env_dpdk/init.o 00:02:38.930 CC lib/env_dpdk/threads.o 00:02:38.930 CC lib/env_dpdk/pci_ioat.o 00:02:38.930 CC lib/env_dpdk/pci_virtio.o 00:02:38.930 CC lib/env_dpdk/pci_vmd.o 00:02:38.930 CC lib/env_dpdk/pci_idxd.o 00:02:38.930 CC lib/env_dpdk/pci_dpdk.o 00:02:38.930 CC lib/env_dpdk/sigbus_handler.o 00:02:38.930 CC lib/env_dpdk/pci_event.o 00:02:38.930 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:38.930 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:38.930 CC lib/json/json_parse.o 00:02:38.930 CC lib/rdma_utils/rdma_utils.o 00:02:38.930 CC lib/json/json_util.o 00:02:38.930 CC lib/json/json_write.o 00:02:38.930 CC lib/conf/conf.o 00:02:39.225 LIB libspdk_conf.a 00:02:39.225 LIB libspdk_rdma_utils.a 00:02:39.225 SO libspdk_conf.so.6.0 00:02:39.225 SO libspdk_rdma_utils.so.1.0 00:02:39.225 LIB libspdk_json.a 00:02:39.484 SYMLINK libspdk_conf.so 00:02:39.484 SO libspdk_json.so.6.0 00:02:39.484 SYMLINK libspdk_rdma_utils.so 00:02:39.484 SYMLINK libspdk_json.so 00:02:39.484 LIB libspdk_idxd.a 00:02:39.743 SO libspdk_idxd.so.12.1 00:02:39.743 LIB libspdk_vmd.a 00:02:39.743 SO libspdk_vmd.so.6.0 00:02:39.743 SYMLINK libspdk_idxd.so 00:02:39.743 CC lib/rdma_provider/common.o 00:02:39.743 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:39.743 CC lib/jsonrpc/jsonrpc_server.o 00:02:39.743 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:39.743 CC lib/jsonrpc/jsonrpc_client.o 00:02:39.743 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:39.743 SYMLINK libspdk_vmd.so 00:02:40.002 LIB libspdk_rdma_provider.a 00:02:40.002 SO libspdk_rdma_provider.so.7.0 00:02:40.002 LIB libspdk_jsonrpc.a 00:02:40.002 SYMLINK libspdk_rdma_provider.so 00:02:40.002 SO 
libspdk_jsonrpc.so.6.0 00:02:40.002 SYMLINK libspdk_jsonrpc.so 00:02:40.262 LIB libspdk_env_dpdk.a 00:02:40.522 SO libspdk_env_dpdk.so.15.1 00:02:40.522 CC lib/rpc/rpc.o 00:02:40.522 SYMLINK libspdk_env_dpdk.so 00:02:40.781 LIB libspdk_rpc.a 00:02:40.781 SO libspdk_rpc.so.6.0 00:02:40.781 SYMLINK libspdk_rpc.so 00:02:41.041 CC lib/keyring/keyring.o 00:02:41.041 CC lib/keyring/keyring_rpc.o 00:02:41.041 CC lib/trace/trace.o 00:02:41.041 CC lib/trace/trace_flags.o 00:02:41.041 CC lib/trace/trace_rpc.o 00:02:41.041 CC lib/notify/notify.o 00:02:41.041 CC lib/notify/notify_rpc.o 00:02:41.300 LIB libspdk_notify.a 00:02:41.300 LIB libspdk_keyring.a 00:02:41.300 SO libspdk_notify.so.6.0 00:02:41.300 LIB libspdk_trace.a 00:02:41.300 SO libspdk_keyring.so.2.0 00:02:41.300 SYMLINK libspdk_notify.so 00:02:41.300 SO libspdk_trace.so.11.0 00:02:41.300 SYMLINK libspdk_keyring.so 00:02:41.558 SYMLINK libspdk_trace.so 00:02:41.818 CC lib/thread/thread.o 00:02:41.818 CC lib/thread/iobuf.o 00:02:41.818 CC lib/sock/sock.o 00:02:41.818 CC lib/sock/sock_rpc.o 00:02:42.386 LIB libspdk_sock.a 00:02:42.386 SO libspdk_sock.so.10.0 00:02:42.386 SYMLINK libspdk_sock.so 00:02:42.646 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:42.646 CC lib/nvme/nvme_ctrlr.o 00:02:42.646 CC lib/nvme/nvme_fabric.o 00:02:42.646 CC lib/nvme/nvme_ns_cmd.o 00:02:42.646 CC lib/nvme/nvme_pcie.o 00:02:42.646 CC lib/nvme/nvme_ns.o 00:02:42.646 CC lib/nvme/nvme_pcie_common.o 00:02:42.646 CC lib/nvme/nvme_qpair.o 00:02:42.646 CC lib/nvme/nvme.o 00:02:42.646 CC lib/nvme/nvme_quirks.o 00:02:42.646 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:42.646 CC lib/nvme/nvme_transport.o 00:02:42.646 CC lib/nvme/nvme_discovery.o 00:02:42.646 CC lib/nvme/nvme_tcp.o 00:02:42.646 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:42.646 CC lib/nvme/nvme_opal.o 00:02:42.646 CC lib/nvme/nvme_io_msg.o 00:02:42.646 CC lib/nvme/nvme_poll_group.o 00:02:42.646 CC lib/nvme/nvme_zns.o 00:02:42.646 CC lib/nvme/nvme_stubs.o 00:02:42.646 CC lib/nvme/nvme_auth.o 00:02:42.646 CC lib/nvme/nvme_cuse.o 00:02:42.646 CC lib/nvme/nvme_rdma.o 00:02:43.214 LIB libspdk_thread.a 00:02:43.214 SO libspdk_thread.so.11.0 00:02:43.473 SYMLINK libspdk_thread.so 00:02:43.732 CC lib/virtio/virtio.o 00:02:43.732 CC lib/virtio/virtio_vhost_user.o 00:02:43.732 CC lib/virtio/virtio_vfio_user.o 00:02:43.732 CC lib/virtio/virtio_pci.o 00:02:43.732 CC lib/blob/request.o 00:02:43.732 CC lib/blob/blobstore.o 00:02:43.732 CC lib/blob/blob_bs_dev.o 00:02:43.732 CC lib/blob/zeroes.o 00:02:43.732 CC lib/accel/accel.o 00:02:43.732 CC lib/init/json_config.o 00:02:43.732 CC lib/accel/accel_sw.o 00:02:43.732 CC lib/init/subsystem.o 00:02:43.732 CC lib/accel/accel_rpc.o 00:02:43.732 CC lib/init/subsystem_rpc.o 00:02:43.732 CC lib/init/rpc.o 00:02:43.732 CC lib/fsdev/fsdev.o 00:02:43.732 CC lib/fsdev/fsdev_io.o 00:02:43.732 CC lib/fsdev/fsdev_rpc.o 00:02:43.990 LIB libspdk_init.a 00:02:43.990 SO libspdk_init.so.6.0 00:02:43.990 LIB libspdk_virtio.a 00:02:43.990 SYMLINK libspdk_init.so 00:02:43.990 SO libspdk_virtio.so.7.0 00:02:43.990 SYMLINK libspdk_virtio.so 00:02:44.249 LIB libspdk_fsdev.a 00:02:44.249 CC lib/event/app.o 00:02:44.249 CC lib/event/reactor.o 00:02:44.249 CC lib/event/log_rpc.o 00:02:44.249 CC lib/event/app_rpc.o 00:02:44.249 CC lib/event/scheduler_static.o 00:02:44.249 SO libspdk_fsdev.so.2.0 00:02:44.508 SYMLINK libspdk_fsdev.so 00:02:44.767 LIB libspdk_accel.a 00:02:44.767 LIB libspdk_nvme.a 00:02:44.767 SO libspdk_accel.so.16.0 00:02:44.767 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:44.767 LIB libspdk_event.a 
00:02:44.767 SO libspdk_nvme.so.15.0 00:02:44.767 SO libspdk_event.so.14.0 00:02:44.767 SYMLINK libspdk_accel.so 00:02:45.026 SYMLINK libspdk_event.so 00:02:45.026 SYMLINK libspdk_nvme.so 00:02:45.286 CC lib/bdev/bdev.o 00:02:45.286 CC lib/bdev/bdev_rpc.o 00:02:45.286 CC lib/bdev/bdev_zone.o 00:02:45.286 CC lib/bdev/part.o 00:02:45.286 CC lib/bdev/scsi_nvme.o 00:02:45.286 LIB libspdk_fuse_dispatcher.a 00:02:45.546 SO libspdk_fuse_dispatcher.so.1.0 00:02:45.546 SYMLINK libspdk_fuse_dispatcher.so 00:02:46.924 LIB libspdk_blob.a 00:02:46.924 SO libspdk_blob.so.12.0 00:02:46.924 SYMLINK libspdk_blob.so 00:02:47.183 CC lib/blobfs/blobfs.o 00:02:47.183 CC lib/blobfs/tree.o 00:02:47.183 CC lib/lvol/lvol.o 00:02:47.752 LIB libspdk_bdev.a 00:02:47.752 SO libspdk_bdev.so.17.0 00:02:47.752 SYMLINK libspdk_bdev.so 00:02:47.752 LIB libspdk_blobfs.a 00:02:48.012 SO libspdk_blobfs.so.11.0 00:02:48.012 SYMLINK libspdk_blobfs.so 00:02:48.012 LIB libspdk_lvol.a 00:02:48.012 SO libspdk_lvol.so.11.0 00:02:48.012 SYMLINK libspdk_lvol.so 00:02:48.271 CC lib/nvmf/ctrlr_bdev.o 00:02:48.271 CC lib/nvmf/ctrlr.o 00:02:48.271 CC lib/nvmf/subsystem.o 00:02:48.271 CC lib/nvmf/ctrlr_discovery.o 00:02:48.271 CC lib/nvmf/nvmf_rpc.o 00:02:48.271 CC lib/nvmf/nvmf.o 00:02:48.271 CC lib/nvmf/transport.o 00:02:48.271 CC lib/nvmf/tcp.o 00:02:48.271 CC lib/nvmf/stubs.o 00:02:48.271 CC lib/nvmf/mdns_server.o 00:02:48.271 CC lib/nvmf/rdma.o 00:02:48.271 CC lib/nvmf/auth.o 00:02:48.271 CC lib/ftl/ftl_core.o 00:02:48.271 CC lib/ftl/ftl_init.o 00:02:48.271 CC lib/ftl/ftl_layout.o 00:02:48.271 CC lib/ublk/ublk.o 00:02:48.271 CC lib/ftl/ftl_debug.o 00:02:48.271 CC lib/ftl/ftl_io.o 00:02:48.271 CC lib/ublk/ublk_rpc.o 00:02:48.271 CC lib/ftl/ftl_sb.o 00:02:48.271 CC lib/ftl/ftl_l2p_flat.o 00:02:48.271 CC lib/ftl/ftl_l2p.o 00:02:48.271 CC lib/ftl/ftl_band.o 00:02:48.271 CC lib/ftl/ftl_nv_cache.o 00:02:48.271 CC lib/ftl/ftl_rq.o 00:02:48.271 CC lib/ftl/ftl_band_ops.o 00:02:48.271 CC lib/ftl/ftl_writer.o 00:02:48.271 CC lib/ftl/ftl_reloc.o 00:02:48.271 CC lib/ftl/ftl_l2p_cache.o 00:02:48.271 CC lib/ftl/ftl_p2l.o 00:02:48.271 CC lib/ftl/ftl_p2l_log.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:48.271 CC lib/scsi/dev.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:48.271 CC lib/scsi/lun.o 00:02:48.271 CC lib/scsi/port.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:48.271 CC lib/scsi/scsi.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:48.271 CC lib/scsi/scsi_bdev.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:48.271 CC lib/scsi/scsi_pr.o 00:02:48.271 CC lib/scsi/scsi_rpc.o 00:02:48.271 CC lib/scsi/task.o 00:02:48.271 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:48.271 CC lib/ftl/utils/ftl_conf.o 00:02:48.271 CC lib/ftl/utils/ftl_md.o 00:02:48.271 CC lib/nbd/nbd.o 00:02:48.271 CC lib/nbd/nbd_rpc.o 00:02:48.271 CC lib/ftl/utils/ftl_bitmap.o 00:02:48.271 CC lib/ftl/utils/ftl_mempool.o 00:02:48.271 CC lib/ftl/utils/ftl_property.o 00:02:48.271 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:48.271 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:48.271 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:48.271 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:48.271 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:48.271 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:48.271 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:48.271 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:48.271 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:48.271 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:48.271 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:48.271 CC lib/ftl/base/ftl_base_dev.o 00:02:48.271 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:48.271 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:48.271 CC lib/ftl/base/ftl_base_bdev.o 00:02:48.271 CC lib/ftl/ftl_trace.o 00:02:48.838 LIB libspdk_nbd.a 00:02:48.838 SO libspdk_nbd.so.7.0 00:02:48.838 SYMLINK libspdk_nbd.so 00:02:48.838 LIB libspdk_scsi.a 00:02:48.838 LIB libspdk_ublk.a 00:02:49.096 SO libspdk_ublk.so.3.0 00:02:49.096 SO libspdk_scsi.so.9.0 00:02:49.096 SYMLINK libspdk_ublk.so 00:02:49.096 SYMLINK libspdk_scsi.so 00:02:49.356 LIB libspdk_ftl.a 00:02:49.356 CC lib/vhost/vhost.o 00:02:49.356 CC lib/vhost/vhost_rpc.o 00:02:49.356 CC lib/vhost/rte_vhost_user.o 00:02:49.356 CC lib/vhost/vhost_scsi.o 00:02:49.356 CC lib/vhost/vhost_blk.o 00:02:49.356 CC lib/iscsi/conn.o 00:02:49.356 CC lib/iscsi/init_grp.o 00:02:49.356 CC lib/iscsi/iscsi.o 00:02:49.356 CC lib/iscsi/param.o 00:02:49.356 CC lib/iscsi/portal_grp.o 00:02:49.356 CC lib/iscsi/tgt_node.o 00:02:49.356 CC lib/iscsi/iscsi_subsystem.o 00:02:49.356 CC lib/iscsi/iscsi_rpc.o 00:02:49.356 CC lib/iscsi/task.o 00:02:49.615 SO libspdk_ftl.so.9.0 00:02:49.875 SYMLINK libspdk_ftl.so 00:02:50.443 LIB libspdk_vhost.a 00:02:50.443 LIB libspdk_nvmf.a 00:02:50.443 SO libspdk_vhost.so.8.0 00:02:50.443 SO libspdk_nvmf.so.20.0 00:02:50.443 SYMLINK libspdk_vhost.so 00:02:50.703 SYMLINK libspdk_nvmf.so 00:02:50.703 LIB libspdk_iscsi.a 00:02:50.703 SO libspdk_iscsi.so.8.0 00:02:50.963 SYMLINK libspdk_iscsi.so 00:02:51.532 CC module/env_dpdk/env_dpdk_rpc.o 00:02:51.532 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:51.532 CC module/blob/bdev/blob_bdev.o 00:02:51.532 CC module/keyring/file/keyring.o 00:02:51.532 CC module/keyring/file/keyring_rpc.o 00:02:51.532 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:51.532 LIB libspdk_env_dpdk_rpc.a 00:02:51.532 CC module/fsdev/aio/linux_aio_mgr.o 00:02:51.532 CC module/fsdev/aio/fsdev_aio.o 00:02:51.532 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:51.532 CC module/accel/iaa/accel_iaa.o 00:02:51.532 CC module/sock/posix/posix.o 00:02:51.532 CC module/scheduler/gscheduler/gscheduler.o 00:02:51.532 CC module/accel/iaa/accel_iaa_rpc.o 00:02:51.532 CC module/accel/ioat/accel_ioat.o 00:02:51.532 CC module/accel/ioat/accel_ioat_rpc.o 00:02:51.532 CC module/accel/dsa/accel_dsa_rpc.o 00:02:51.532 CC module/accel/dsa/accel_dsa.o 00:02:51.532 CC module/keyring/linux/keyring.o 00:02:51.532 CC module/keyring/linux/keyring_rpc.o 00:02:51.532 CC module/accel/error/accel_error.o 00:02:51.532 CC module/accel/error/accel_error_rpc.o 00:02:51.532 SO libspdk_env_dpdk_rpc.so.6.0 00:02:51.791 SYMLINK libspdk_env_dpdk_rpc.so 00:02:51.791 LIB libspdk_keyring_file.a 00:02:51.791 LIB libspdk_scheduler_dpdk_governor.a 00:02:51.791 SO libspdk_keyring_file.so.2.0 00:02:51.791 LIB libspdk_scheduler_gscheduler.a 00:02:51.791 LIB libspdk_keyring_linux.a 00:02:51.791 LIB libspdk_accel_ioat.a 00:02:51.791 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:51.791 SO libspdk_scheduler_gscheduler.so.4.0 00:02:51.791 LIB libspdk_scheduler_dynamic.a 00:02:51.791 SO libspdk_keyring_linux.so.1.0 00:02:51.791 SO libspdk_accel_ioat.so.6.0 00:02:51.791 LIB libspdk_accel_iaa.a 00:02:51.791 SYMLINK libspdk_keyring_file.so 00:02:51.791 SO libspdk_scheduler_dynamic.so.4.0 
00:02:51.791 LIB libspdk_accel_error.a 00:02:51.791 LIB libspdk_blob_bdev.a 00:02:51.791 SYMLINK libspdk_scheduler_gscheduler.so 00:02:51.791 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:51.791 SO libspdk_accel_error.so.2.0 00:02:51.791 SO libspdk_accel_iaa.so.3.0 00:02:51.791 SYMLINK libspdk_keyring_linux.so 00:02:51.791 SO libspdk_blob_bdev.so.12.0 00:02:51.791 LIB libspdk_accel_dsa.a 00:02:51.791 SYMLINK libspdk_accel_ioat.so 00:02:51.791 SYMLINK libspdk_scheduler_dynamic.so 00:02:52.050 SO libspdk_accel_dsa.so.5.0 00:02:52.050 SYMLINK libspdk_accel_iaa.so 00:02:52.050 SYMLINK libspdk_accel_error.so 00:02:52.050 SYMLINK libspdk_blob_bdev.so 00:02:52.050 SYMLINK libspdk_accel_dsa.so 00:02:52.311 LIB libspdk_fsdev_aio.a 00:02:52.311 SO libspdk_fsdev_aio.so.1.0 00:02:52.311 LIB libspdk_sock_posix.a 00:02:52.311 SO libspdk_sock_posix.so.6.0 00:02:52.311 SYMLINK libspdk_fsdev_aio.so 00:02:52.311 CC module/bdev/delay/vbdev_delay.o 00:02:52.311 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:52.311 CC module/bdev/lvol/vbdev_lvol.o 00:02:52.311 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:52.311 CC module/bdev/error/vbdev_error.o 00:02:52.311 CC module/bdev/error/vbdev_error_rpc.o 00:02:52.311 CC module/bdev/null/bdev_null.o 00:02:52.311 CC module/bdev/null/bdev_null_rpc.o 00:02:52.311 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:52.311 CC module/blobfs/bdev/blobfs_bdev.o 00:02:52.311 CC module/bdev/aio/bdev_aio_rpc.o 00:02:52.311 CC module/bdev/aio/bdev_aio.o 00:02:52.569 CC module/bdev/ftl/bdev_ftl.o 00:02:52.569 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:52.569 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:52.569 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:52.569 CC module/bdev/passthru/vbdev_passthru.o 00:02:52.569 CC module/bdev/nvme/bdev_nvme.o 00:02:52.569 CC module/bdev/iscsi/bdev_iscsi.o 00:02:52.569 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:52.569 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:52.569 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:52.569 CC module/bdev/malloc/bdev_malloc.o 00:02:52.569 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:52.570 CC module/bdev/nvme/bdev_mdns_client.o 00:02:52.570 CC module/bdev/nvme/nvme_rpc.o 00:02:52.570 CC module/bdev/raid/bdev_raid.o 00:02:52.570 CC module/bdev/gpt/vbdev_gpt.o 00:02:52.570 CC module/bdev/raid/raid0.o 00:02:52.570 CC module/bdev/nvme/vbdev_opal.o 00:02:52.570 CC module/bdev/raid/bdev_raid_rpc.o 00:02:52.570 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:52.570 CC module/bdev/gpt/gpt.o 00:02:52.570 CC module/bdev/raid/bdev_raid_sb.o 00:02:52.570 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:52.570 CC module/bdev/raid/raid1.o 00:02:52.570 CC module/bdev/split/vbdev_split.o 00:02:52.570 CC module/bdev/raid/concat.o 00:02:52.570 CC module/bdev/split/vbdev_split_rpc.o 00:02:52.570 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:52.570 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:52.570 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:52.570 SYMLINK libspdk_sock_posix.so 00:02:52.570 LIB libspdk_blobfs_bdev.a 00:02:52.570 SO libspdk_blobfs_bdev.so.6.0 00:02:52.829 LIB libspdk_bdev_split.a 00:02:52.829 SO libspdk_bdev_split.so.6.0 00:02:52.829 SYMLINK libspdk_blobfs_bdev.so 00:02:52.829 LIB libspdk_bdev_null.a 00:02:52.829 LIB libspdk_bdev_error.a 00:02:52.829 SO libspdk_bdev_null.so.6.0 00:02:52.829 LIB libspdk_bdev_ftl.a 00:02:52.829 LIB libspdk_bdev_gpt.a 00:02:52.829 SO libspdk_bdev_error.so.6.0 00:02:52.829 LIB libspdk_bdev_passthru.a 00:02:52.829 SYMLINK libspdk_bdev_split.so 00:02:52.829 LIB libspdk_bdev_aio.a 
00:02:52.829 SO libspdk_bdev_ftl.so.6.0 00:02:52.829 SO libspdk_bdev_passthru.so.6.0 00:02:52.829 SO libspdk_bdev_gpt.so.6.0 00:02:52.829 LIB libspdk_bdev_zone_block.a 00:02:52.829 LIB libspdk_bdev_delay.a 00:02:52.829 SYMLINK libspdk_bdev_null.so 00:02:52.829 LIB libspdk_bdev_iscsi.a 00:02:52.829 SO libspdk_bdev_aio.so.6.0 00:02:52.829 SYMLINK libspdk_bdev_error.so 00:02:52.829 SO libspdk_bdev_iscsi.so.6.0 00:02:52.829 LIB libspdk_bdev_malloc.a 00:02:52.829 SO libspdk_bdev_zone_block.so.6.0 00:02:52.829 SYMLINK libspdk_bdev_passthru.so 00:02:52.829 SO libspdk_bdev_delay.so.6.0 00:02:52.829 SYMLINK libspdk_bdev_ftl.so 00:02:52.829 SYMLINK libspdk_bdev_gpt.so 00:02:52.829 SYMLINK libspdk_bdev_aio.so 00:02:52.829 SO libspdk_bdev_malloc.so.6.0 00:02:53.089 SYMLINK libspdk_bdev_delay.so 00:02:53.089 SYMLINK libspdk_bdev_iscsi.so 00:02:53.089 SYMLINK libspdk_bdev_zone_block.so 00:02:53.089 LIB libspdk_bdev_lvol.a 00:02:53.089 SYMLINK libspdk_bdev_malloc.so 00:02:53.089 SO libspdk_bdev_lvol.so.6.0 00:02:53.089 LIB libspdk_bdev_virtio.a 00:02:53.089 SO libspdk_bdev_virtio.so.6.0 00:02:53.089 SYMLINK libspdk_bdev_lvol.so 00:02:53.089 SYMLINK libspdk_bdev_virtio.so 00:02:53.659 LIB libspdk_bdev_raid.a 00:02:53.659 SO libspdk_bdev_raid.so.6.0 00:02:53.659 SYMLINK libspdk_bdev_raid.so 00:02:55.039 LIB libspdk_bdev_nvme.a 00:02:55.039 SO libspdk_bdev_nvme.so.7.1 00:02:55.039 SYMLINK libspdk_bdev_nvme.so 00:02:55.608 CC module/event/subsystems/fsdev/fsdev.o 00:02:55.608 CC module/event/subsystems/scheduler/scheduler.o 00:02:55.608 CC module/event/subsystems/iobuf/iobuf.o 00:02:55.608 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.608 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:55.608 CC module/event/subsystems/sock/sock.o 00:02:55.608 CC module/event/subsystems/vmd/vmd.o 00:02:55.608 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:55.608 CC module/event/subsystems/keyring/keyring.o 00:02:55.867 LIB libspdk_event_fsdev.a 00:02:55.867 LIB libspdk_event_vhost_blk.a 00:02:55.867 LIB libspdk_event_scheduler.a 00:02:55.867 LIB libspdk_event_sock.a 00:02:55.867 SO libspdk_event_fsdev.so.1.0 00:02:55.867 LIB libspdk_event_keyring.a 00:02:55.867 LIB libspdk_event_vmd.a 00:02:55.867 SO libspdk_event_vhost_blk.so.3.0 00:02:55.867 LIB libspdk_event_iobuf.a 00:02:55.867 SO libspdk_event_sock.so.5.0 00:02:55.867 SO libspdk_event_scheduler.so.4.0 00:02:55.867 SO libspdk_event_keyring.so.1.0 00:02:55.867 SYMLINK libspdk_event_fsdev.so 00:02:55.867 SO libspdk_event_vmd.so.6.0 00:02:55.867 SO libspdk_event_iobuf.so.3.0 00:02:55.867 SYMLINK libspdk_event_vhost_blk.so 00:02:55.867 SYMLINK libspdk_event_sock.so 00:02:55.867 SYMLINK libspdk_event_scheduler.so 00:02:55.867 SYMLINK libspdk_event_keyring.so 00:02:55.867 SYMLINK libspdk_event_iobuf.so 00:02:55.867 SYMLINK libspdk_event_vmd.so 00:02:56.434 CC module/event/subsystems/accel/accel.o 00:02:56.435 LIB libspdk_event_accel.a 00:02:56.435 SO libspdk_event_accel.so.6.0 00:02:56.693 SYMLINK libspdk_event_accel.so 00:02:56.953 CC module/event/subsystems/bdev/bdev.o 00:02:56.953 LIB libspdk_event_bdev.a 00:02:57.212 SO libspdk_event_bdev.so.6.0 00:02:57.212 SYMLINK libspdk_event_bdev.so 00:02:57.470 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.470 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.470 CC module/event/subsystems/ublk/ublk.o 00:02:57.470 CC module/event/subsystems/scsi/scsi.o 00:02:57.470 CC module/event/subsystems/nbd/nbd.o 00:02:57.729 LIB libspdk_event_ublk.a 00:02:57.729 SO libspdk_event_ublk.so.3.0 00:02:57.729 LIB 
libspdk_event_nbd.a 00:02:57.729 LIB libspdk_event_scsi.a 00:02:57.729 LIB libspdk_event_nvmf.a 00:02:57.729 SO libspdk_event_nbd.so.6.0 00:02:57.729 SO libspdk_event_scsi.so.6.0 00:02:57.729 SYMLINK libspdk_event_ublk.so 00:02:57.729 SO libspdk_event_nvmf.so.6.0 00:02:57.729 SYMLINK libspdk_event_nbd.so 00:02:57.729 SYMLINK libspdk_event_scsi.so 00:02:57.729 SYMLINK libspdk_event_nvmf.so 00:02:58.296 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:58.296 CC module/event/subsystems/iscsi/iscsi.o 00:02:58.296 LIB libspdk_event_vhost_scsi.a 00:02:58.296 SO libspdk_event_vhost_scsi.so.3.0 00:02:58.296 LIB libspdk_event_iscsi.a 00:02:58.296 SYMLINK libspdk_event_vhost_scsi.so 00:02:58.296 SO libspdk_event_iscsi.so.6.0 00:02:58.296 SYMLINK libspdk_event_iscsi.so 00:02:58.555 SO libspdk.so.6.0 00:02:58.555 SYMLINK libspdk.so 00:02:58.812 CC app/trace_record/trace_record.o 00:02:59.089 CC app/spdk_nvme_discover/discovery_aer.o 00:02:59.089 CXX app/trace/trace.o 00:02:59.089 CC app/spdk_top/spdk_top.o 00:02:59.089 CC app/spdk_nvme_identify/identify.o 00:02:59.089 CC app/spdk_lspci/spdk_lspci.o 00:02:59.089 CC app/spdk_nvme_perf/perf.o 00:02:59.089 CC app/iscsi_tgt/iscsi_tgt.o 00:02:59.089 CC test/rpc_client/rpc_client_test.o 00:02:59.089 TEST_HEADER include/spdk/accel.h 00:02:59.089 TEST_HEADER include/spdk/assert.h 00:02:59.089 TEST_HEADER include/spdk/accel_module.h 00:02:59.089 TEST_HEADER include/spdk/bdev_module.h 00:02:59.089 TEST_HEADER include/spdk/barrier.h 00:02:59.089 TEST_HEADER include/spdk/base64.h 00:02:59.089 TEST_HEADER include/spdk/bdev.h 00:02:59.089 TEST_HEADER include/spdk/bit_pool.h 00:02:59.089 TEST_HEADER include/spdk/bdev_zone.h 00:02:59.089 TEST_HEADER include/spdk/blob_bdev.h 00:02:59.089 TEST_HEADER include/spdk/bit_array.h 00:02:59.089 TEST_HEADER include/spdk/blobfs.h 00:02:59.089 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:59.089 CC app/spdk_dd/spdk_dd.o 00:02:59.089 TEST_HEADER include/spdk/blob.h 00:02:59.089 TEST_HEADER include/spdk/conf.h 00:02:59.089 TEST_HEADER include/spdk/cpuset.h 00:02:59.089 TEST_HEADER include/spdk/config.h 00:02:59.089 TEST_HEADER include/spdk/crc16.h 00:02:59.089 TEST_HEADER include/spdk/crc32.h 00:02:59.089 TEST_HEADER include/spdk/crc64.h 00:02:59.089 TEST_HEADER include/spdk/dif.h 00:02:59.089 TEST_HEADER include/spdk/endian.h 00:02:59.089 TEST_HEADER include/spdk/dma.h 00:02:59.089 CC app/nvmf_tgt/nvmf_main.o 00:02:59.089 TEST_HEADER include/spdk/env_dpdk.h 00:02:59.089 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:59.089 TEST_HEADER include/spdk/env.h 00:02:59.089 TEST_HEADER include/spdk/event.h 00:02:59.089 TEST_HEADER include/spdk/fd_group.h 00:02:59.089 TEST_HEADER include/spdk/file.h 00:02:59.089 TEST_HEADER include/spdk/fd.h 00:02:59.089 TEST_HEADER include/spdk/fsdev.h 00:02:59.089 TEST_HEADER include/spdk/ftl.h 00:02:59.089 CC app/spdk_tgt/spdk_tgt.o 00:02:59.089 TEST_HEADER include/spdk/fsdev_module.h 00:02:59.089 TEST_HEADER include/spdk/gpt_spec.h 00:02:59.089 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:59.089 TEST_HEADER include/spdk/hexlify.h 00:02:59.089 TEST_HEADER include/spdk/idxd_spec.h 00:02:59.089 TEST_HEADER include/spdk/histogram_data.h 00:02:59.089 TEST_HEADER include/spdk/init.h 00:02:59.089 TEST_HEADER include/spdk/idxd.h 00:02:59.089 TEST_HEADER include/spdk/ioat_spec.h 00:02:59.089 TEST_HEADER include/spdk/ioat.h 00:02:59.089 TEST_HEADER include/spdk/iscsi_spec.h 00:02:59.089 TEST_HEADER include/spdk/json.h 00:02:59.089 TEST_HEADER include/spdk/keyring.h 00:02:59.089 TEST_HEADER 
include/spdk/jsonrpc.h 00:02:59.089 TEST_HEADER include/spdk/keyring_module.h 00:02:59.089 TEST_HEADER include/spdk/likely.h 00:02:59.089 TEST_HEADER include/spdk/log.h 00:02:59.089 TEST_HEADER include/spdk/md5.h 00:02:59.089 TEST_HEADER include/spdk/lvol.h 00:02:59.089 TEST_HEADER include/spdk/memory.h 00:02:59.089 TEST_HEADER include/spdk/mmio.h 00:02:59.089 TEST_HEADER include/spdk/nbd.h 00:02:59.089 TEST_HEADER include/spdk/net.h 00:02:59.089 TEST_HEADER include/spdk/notify.h 00:02:59.089 TEST_HEADER include/spdk/nvme.h 00:02:59.089 TEST_HEADER include/spdk/nvme_intel.h 00:02:59.089 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:59.089 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:59.089 TEST_HEADER include/spdk/nvme_zns.h 00:02:59.089 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:59.089 TEST_HEADER include/spdk/nvme_spec.h 00:02:59.089 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:59.089 TEST_HEADER include/spdk/nvmf.h 00:02:59.089 TEST_HEADER include/spdk/nvmf_spec.h 00:02:59.089 TEST_HEADER include/spdk/nvmf_transport.h 00:02:59.089 TEST_HEADER include/spdk/opal_spec.h 00:02:59.089 TEST_HEADER include/spdk/opal.h 00:02:59.089 TEST_HEADER include/spdk/pci_ids.h 00:02:59.089 TEST_HEADER include/spdk/pipe.h 00:02:59.089 TEST_HEADER include/spdk/reduce.h 00:02:59.089 TEST_HEADER include/spdk/scheduler.h 00:02:59.089 TEST_HEADER include/spdk/queue.h 00:02:59.089 TEST_HEADER include/spdk/scsi.h 00:02:59.089 TEST_HEADER include/spdk/rpc.h 00:02:59.089 TEST_HEADER include/spdk/sock.h 00:02:59.089 TEST_HEADER include/spdk/scsi_spec.h 00:02:59.089 TEST_HEADER include/spdk/thread.h 00:02:59.089 TEST_HEADER include/spdk/string.h 00:02:59.089 TEST_HEADER include/spdk/stdinc.h 00:02:59.089 TEST_HEADER include/spdk/trace_parser.h 00:02:59.089 TEST_HEADER include/spdk/trace.h 00:02:59.089 TEST_HEADER include/spdk/tree.h 00:02:59.089 TEST_HEADER include/spdk/ublk.h 00:02:59.089 TEST_HEADER include/spdk/util.h 00:02:59.089 TEST_HEADER include/spdk/uuid.h 00:02:59.089 TEST_HEADER include/spdk/version.h 00:02:59.089 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:59.089 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:59.089 TEST_HEADER include/spdk/vmd.h 00:02:59.089 TEST_HEADER include/spdk/vhost.h 00:02:59.089 TEST_HEADER include/spdk/xor.h 00:02:59.089 TEST_HEADER include/spdk/zipf.h 00:02:59.089 CXX test/cpp_headers/accel.o 00:02:59.089 CXX test/cpp_headers/accel_module.o 00:02:59.089 CXX test/cpp_headers/assert.o 00:02:59.089 CXX test/cpp_headers/barrier.o 00:02:59.089 CXX test/cpp_headers/bdev_module.o 00:02:59.089 CXX test/cpp_headers/bdev.o 00:02:59.090 CXX test/cpp_headers/base64.o 00:02:59.090 CXX test/cpp_headers/bit_array.o 00:02:59.090 CXX test/cpp_headers/bdev_zone.o 00:02:59.090 CXX test/cpp_headers/bit_pool.o 00:02:59.090 CXX test/cpp_headers/blob_bdev.o 00:02:59.090 CXX test/cpp_headers/blob.o 00:02:59.090 CXX test/cpp_headers/blobfs_bdev.o 00:02:59.090 CXX test/cpp_headers/blobfs.o 00:02:59.090 CXX test/cpp_headers/config.o 00:02:59.090 CXX test/cpp_headers/conf.o 00:02:59.090 CXX test/cpp_headers/crc16.o 00:02:59.090 CXX test/cpp_headers/cpuset.o 00:02:59.090 CXX test/cpp_headers/crc32.o 00:02:59.090 CXX test/cpp_headers/dif.o 00:02:59.090 CXX test/cpp_headers/endian.o 00:02:59.090 CXX test/cpp_headers/crc64.o 00:02:59.090 CXX test/cpp_headers/dma.o 00:02:59.090 CXX test/cpp_headers/event.o 00:02:59.090 CXX test/cpp_headers/env_dpdk.o 00:02:59.090 CXX test/cpp_headers/env.o 00:02:59.090 CXX test/cpp_headers/fd_group.o 00:02:59.090 CXX test/cpp_headers/fd.o 00:02:59.090 CXX 
test/cpp_headers/file.o 00:02:59.090 CXX test/cpp_headers/fsdev_module.o 00:02:59.090 CXX test/cpp_headers/fsdev.o 00:02:59.090 CXX test/cpp_headers/fuse_dispatcher.o 00:02:59.090 CXX test/cpp_headers/ftl.o 00:02:59.090 CXX test/cpp_headers/hexlify.o 00:02:59.090 CXX test/cpp_headers/gpt_spec.o 00:02:59.090 CXX test/cpp_headers/idxd.o 00:02:59.090 CXX test/cpp_headers/idxd_spec.o 00:02:59.090 CXX test/cpp_headers/histogram_data.o 00:02:59.090 CXX test/cpp_headers/init.o 00:02:59.090 CXX test/cpp_headers/ioat.o 00:02:59.090 CXX test/cpp_headers/ioat_spec.o 00:02:59.090 CXX test/cpp_headers/iscsi_spec.o 00:02:59.090 CXX test/cpp_headers/json.o 00:02:59.090 CXX test/cpp_headers/jsonrpc.o 00:02:59.090 CXX test/cpp_headers/keyring_module.o 00:02:59.090 CXX test/cpp_headers/keyring.o 00:02:59.090 CXX test/cpp_headers/likely.o 00:02:59.090 CXX test/cpp_headers/log.o 00:02:59.090 CXX test/cpp_headers/lvol.o 00:02:59.090 CC app/fio/nvme/fio_plugin.o 00:02:59.090 CXX test/cpp_headers/md5.o 00:02:59.090 CC test/app/stub/stub.o 00:02:59.090 CC test/app/jsoncat/jsoncat.o 00:02:59.090 CXX test/cpp_headers/nbd.o 00:02:59.090 CXX test/cpp_headers/memory.o 00:02:59.090 CXX test/cpp_headers/mmio.o 00:02:59.090 CC test/app/histogram_perf/histogram_perf.o 00:02:59.090 CXX test/cpp_headers/net.o 00:02:59.090 CXX test/cpp_headers/notify.o 00:02:59.090 CXX test/cpp_headers/nvme.o 00:02:59.090 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:59.090 CXX test/cpp_headers/nvme_intel.o 00:02:59.090 CXX test/cpp_headers/nvme_ocssd.o 00:02:59.090 CXX test/cpp_headers/nvme_spec.o 00:02:59.090 CXX test/cpp_headers/nvmf_cmd.o 00:02:59.090 CXX test/cpp_headers/nvme_zns.o 00:02:59.090 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:59.090 CXX test/cpp_headers/nvmf.o 00:02:59.090 CXX test/cpp_headers/nvmf_spec.o 00:02:59.090 CXX test/cpp_headers/nvmf_transport.o 00:02:59.090 CXX test/cpp_headers/opal.o 00:02:59.090 CC examples/util/zipf/zipf.o 00:02:59.090 CXX test/cpp_headers/opal_spec.o 00:02:59.090 CXX test/cpp_headers/pci_ids.o 00:02:59.090 CXX test/cpp_headers/pipe.o 00:02:59.090 CXX test/cpp_headers/queue.o 00:02:59.090 CXX test/cpp_headers/reduce.o 00:02:59.090 CXX test/cpp_headers/rpc.o 00:02:59.090 CXX test/cpp_headers/scheduler.o 00:02:59.090 CXX test/cpp_headers/scsi.o 00:02:59.090 LINK spdk_lspci 00:02:59.090 CXX test/cpp_headers/scsi_spec.o 00:02:59.090 CXX test/cpp_headers/sock.o 00:02:59.090 CXX test/cpp_headers/stdinc.o 00:02:59.090 CXX test/cpp_headers/string.o 00:02:59.090 CC examples/ioat/perf/perf.o 00:02:59.090 CXX test/cpp_headers/thread.o 00:02:59.090 CXX test/cpp_headers/trace.o 00:02:59.371 CC examples/ioat/verify/verify.o 00:02:59.371 CC test/thread/poller_perf/poller_perf.o 00:02:59.371 CC app/fio/bdev/fio_plugin.o 00:02:59.371 CC test/env/pci/pci_ut.o 00:02:59.371 CC test/app/bdev_svc/bdev_svc.o 00:02:59.371 CXX test/cpp_headers/trace_parser.o 00:02:59.371 CC test/dma/test_dma/test_dma.o 00:02:59.371 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:59.371 CC test/env/memory/memory_ut.o 00:02:59.371 CC test/env/vtophys/vtophys.o 00:02:59.371 CXX test/cpp_headers/tree.o 00:02:59.371 LINK spdk_nvme_discover 00:02:59.662 LINK iscsi_tgt 00:02:59.662 CXX test/cpp_headers/ublk.o 00:02:59.662 LINK nvmf_tgt 00:02:59.662 LINK rpc_client_test 00:02:59.662 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:59.662 LINK interrupt_tgt 00:02:59.662 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:59.662 LINK spdk_tgt 00:02:59.925 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:59.925 LINK spdk_trace_record 00:02:59.925 
CC test/env/mem_callbacks/mem_callbacks.o 00:02:59.926 LINK jsoncat 00:02:59.926 LINK zipf 00:02:59.926 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:59.926 LINK poller_perf 00:02:59.926 CXX test/cpp_headers/util.o 00:02:59.926 CXX test/cpp_headers/uuid.o 00:02:59.926 CXX test/cpp_headers/version.o 00:02:59.926 CXX test/cpp_headers/vfio_user_pci.o 00:02:59.926 CXX test/cpp_headers/vfio_user_spec.o 00:02:59.926 CXX test/cpp_headers/vhost.o 00:02:59.926 CXX test/cpp_headers/vmd.o 00:02:59.926 CXX test/cpp_headers/xor.o 00:02:59.926 CXX test/cpp_headers/zipf.o 00:02:59.926 LINK histogram_perf 00:02:59.926 LINK bdev_svc 00:02:59.926 LINK stub 00:02:59.926 LINK vtophys 00:02:59.926 LINK env_dpdk_post_init 00:02:59.926 LINK verify 00:03:00.184 LINK spdk_dd 00:03:00.184 LINK ioat_perf 00:03:00.184 LINK spdk_trace 00:03:00.184 LINK pci_ut 00:03:00.443 LINK test_dma 00:03:00.443 LINK spdk_bdev 00:03:00.443 LINK vhost_fuzz 00:03:00.443 LINK spdk_nvme 00:03:00.443 CC test/event/reactor/reactor.o 00:03:00.443 LINK mem_callbacks 00:03:00.443 CC test/event/reactor_perf/reactor_perf.o 00:03:00.443 CC examples/sock/hello_world/hello_sock.o 00:03:00.443 CC examples/idxd/perf/perf.o 00:03:00.443 LINK nvme_fuzz 00:03:00.443 CC examples/vmd/lsvmd/lsvmd.o 00:03:00.443 CC test/event/event_perf/event_perf.o 00:03:00.443 CC examples/vmd/led/led.o 00:03:00.443 CC test/event/app_repeat/app_repeat.o 00:03:00.443 CC examples/thread/thread/thread_ex.o 00:03:00.443 CC test/event/scheduler/scheduler.o 00:03:00.443 CC app/vhost/vhost.o 00:03:00.443 LINK spdk_nvme_identify 00:03:00.702 LINK spdk_top 00:03:00.702 LINK spdk_nvme_perf 00:03:00.702 LINK reactor 00:03:00.702 LINK reactor_perf 00:03:00.702 LINK lsvmd 00:03:00.702 LINK event_perf 00:03:00.702 LINK led 00:03:00.702 LINK app_repeat 00:03:00.702 LINK vhost 00:03:00.702 LINK hello_sock 00:03:00.702 LINK thread 00:03:00.702 LINK scheduler 00:03:00.961 LINK idxd_perf 00:03:00.961 CC test/nvme/aer/aer.o 00:03:00.961 CC test/nvme/e2edp/nvme_dp.o 00:03:00.961 CC test/nvme/cuse/cuse.o 00:03:00.961 CC test/nvme/startup/startup.o 00:03:00.961 CC test/nvme/reserve/reserve.o 00:03:00.961 CC test/nvme/err_injection/err_injection.o 00:03:00.961 CC test/nvme/connect_stress/connect_stress.o 00:03:00.961 CC test/nvme/reset/reset.o 00:03:00.961 CC test/nvme/boot_partition/boot_partition.o 00:03:00.961 CC test/nvme/simple_copy/simple_copy.o 00:03:00.961 CC test/nvme/overhead/overhead.o 00:03:00.961 CC test/nvme/fdp/fdp.o 00:03:00.961 CC test/nvme/compliance/nvme_compliance.o 00:03:00.961 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:00.961 CC test/blobfs/mkfs/mkfs.o 00:03:00.961 CC test/nvme/sgl/sgl.o 00:03:00.961 CC test/nvme/fused_ordering/fused_ordering.o 00:03:00.961 CC test/accel/dif/dif.o 00:03:00.961 LINK memory_ut 00:03:00.961 CC test/lvol/esnap/esnap.o 00:03:00.961 LINK boot_partition 00:03:00.961 LINK startup 00:03:00.961 LINK connect_stress 00:03:00.961 LINK err_injection 00:03:00.961 LINK reserve 00:03:00.961 LINK doorbell_aers 00:03:00.961 LINK mkfs 00:03:01.220 LINK fused_ordering 00:03:01.220 LINK simple_copy 00:03:01.220 LINK nvme_dp 00:03:01.220 LINK reset 00:03:01.220 LINK aer 00:03:01.220 LINK sgl 00:03:01.220 LINK overhead 00:03:01.220 CC examples/nvme/abort/abort.o 00:03:01.220 CC examples/nvme/hotplug/hotplug.o 00:03:01.220 CC examples/nvme/hello_world/hello_world.o 00:03:01.220 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:01.220 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:01.220 CC examples/nvme/arbitration/arbitration.o 00:03:01.220 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:03:01.220 CC examples/nvme/reconnect/reconnect.o 00:03:01.220 LINK fdp 00:03:01.220 LINK nvme_compliance 00:03:01.220 CC examples/accel/perf/accel_perf.o 00:03:01.220 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:01.220 CC examples/blob/hello_world/hello_blob.o 00:03:01.220 CC examples/blob/cli/blobcli.o 00:03:01.478 LINK cmb_copy 00:03:01.478 LINK pmr_persistence 00:03:01.478 LINK hotplug 00:03:01.478 LINK hello_world 00:03:01.478 LINK arbitration 00:03:01.478 LINK abort 00:03:01.478 LINK reconnect 00:03:01.478 LINK hello_blob 00:03:01.478 LINK hello_fsdev 00:03:01.737 LINK dif 00:03:01.737 LINK iscsi_fuzz 00:03:01.737 LINK nvme_manage 00:03:01.737 LINK accel_perf 00:03:01.737 LINK blobcli 00:03:01.995 LINK cuse 00:03:02.254 CC test/bdev/bdevio/bdevio.o 00:03:02.254 CC examples/bdev/hello_world/hello_bdev.o 00:03:02.254 CC examples/bdev/bdevperf/bdevperf.o 00:03:02.513 LINK bdevio 00:03:02.513 LINK hello_bdev 00:03:03.082 LINK bdevperf 00:03:03.668 CC examples/nvmf/nvmf/nvmf.o 00:03:03.927 LINK nvmf 00:03:05.836 LINK esnap 00:03:06.095 00:03:06.095 real 0m58.326s 00:03:06.095 user 8m16.321s 00:03:06.095 sys 4m13.245s 00:03:06.095 10:35:44 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:06.095 10:35:44 make -- common/autotest_common.sh@10 -- $ set +x 00:03:06.095 ************************************ 00:03:06.095 END TEST make 00:03:06.095 ************************************ 00:03:06.095 10:35:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:06.095 10:35:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:06.095 10:35:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:06.095 10:35:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.095 10:35:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:06.095 10:35:44 -- pm/common@44 -- $ pid=1381246 00:03:06.095 10:35:44 -- pm/common@50 -- $ kill -TERM 1381246 00:03:06.095 10:35:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.096 10:35:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:06.096 10:35:44 -- pm/common@44 -- $ pid=1381248 00:03:06.096 10:35:44 -- pm/common@50 -- $ kill -TERM 1381248 00:03:06.096 10:35:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.096 10:35:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:06.096 10:35:44 -- pm/common@44 -- $ pid=1381250 00:03:06.096 10:35:44 -- pm/common@50 -- $ kill -TERM 1381250 00:03:06.096 10:35:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.096 10:35:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:06.096 10:35:44 -- pm/common@44 -- $ pid=1381266 00:03:06.096 10:35:44 -- pm/common@50 -- $ sudo -E kill -TERM 1381266 00:03:06.096 10:35:44 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:06.096 10:35:44 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:03:06.356 10:35:45 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:06.356 10:35:45 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:06.356 10:35:45 -- common/autotest_common.sh@1693 -- # awk '{print 
$NF}' 00:03:06.356 10:35:45 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:06.356 10:35:45 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:06.356 10:35:45 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:06.356 10:35:45 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:06.356 10:35:45 -- scripts/common.sh@336 -- # IFS=.-: 00:03:06.356 10:35:45 -- scripts/common.sh@336 -- # read -ra ver1 00:03:06.356 10:35:45 -- scripts/common.sh@337 -- # IFS=.-: 00:03:06.356 10:35:45 -- scripts/common.sh@337 -- # read -ra ver2 00:03:06.356 10:35:45 -- scripts/common.sh@338 -- # local 'op=<' 00:03:06.356 10:35:45 -- scripts/common.sh@340 -- # ver1_l=2 00:03:06.356 10:35:45 -- scripts/common.sh@341 -- # ver2_l=1 00:03:06.356 10:35:45 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:06.356 10:35:45 -- scripts/common.sh@344 -- # case "$op" in 00:03:06.356 10:35:45 -- scripts/common.sh@345 -- # : 1 00:03:06.356 10:35:45 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:06.356 10:35:45 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:06.356 10:35:45 -- scripts/common.sh@365 -- # decimal 1 00:03:06.356 10:35:45 -- scripts/common.sh@353 -- # local d=1 00:03:06.356 10:35:45 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:06.356 10:35:45 -- scripts/common.sh@355 -- # echo 1 00:03:06.356 10:35:45 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:06.357 10:35:45 -- scripts/common.sh@366 -- # decimal 2 00:03:06.357 10:35:45 -- scripts/common.sh@353 -- # local d=2 00:03:06.357 10:35:45 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:06.357 10:35:45 -- scripts/common.sh@355 -- # echo 2 00:03:06.357 10:35:45 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:06.357 10:35:45 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:06.357 10:35:45 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:06.357 10:35:45 -- scripts/common.sh@368 -- # return 0 00:03:06.357 10:35:45 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:06.357 10:35:45 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:06.357 --rc genhtml_branch_coverage=1 00:03:06.357 --rc genhtml_function_coverage=1 00:03:06.357 --rc genhtml_legend=1 00:03:06.357 --rc geninfo_all_blocks=1 00:03:06.357 --rc geninfo_unexecuted_blocks=1 00:03:06.357 00:03:06.357 ' 00:03:06.357 10:35:45 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:06.357 --rc genhtml_branch_coverage=1 00:03:06.357 --rc genhtml_function_coverage=1 00:03:06.357 --rc genhtml_legend=1 00:03:06.357 --rc geninfo_all_blocks=1 00:03:06.357 --rc geninfo_unexecuted_blocks=1 00:03:06.357 00:03:06.357 ' 00:03:06.357 10:35:45 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:06.357 --rc genhtml_branch_coverage=1 00:03:06.357 --rc genhtml_function_coverage=1 00:03:06.357 --rc genhtml_legend=1 00:03:06.357 --rc geninfo_all_blocks=1 00:03:06.357 --rc geninfo_unexecuted_blocks=1 00:03:06.357 00:03:06.357 ' 00:03:06.357 10:35:45 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:06.357 --rc genhtml_branch_coverage=1 00:03:06.357 --rc genhtml_function_coverage=1 00:03:06.357 --rc genhtml_legend=1 00:03:06.357 --rc geninfo_all_blocks=1 00:03:06.357 --rc 
geninfo_unexecuted_blocks=1 00:03:06.357 00:03:06.357 ' 00:03:06.357 10:35:45 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:06.357 10:35:45 -- nvmf/common.sh@7 -- # uname -s 00:03:06.357 10:35:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:06.357 10:35:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:06.357 10:35:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:06.357 10:35:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:06.357 10:35:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:06.357 10:35:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:06.357 10:35:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:06.357 10:35:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:06.357 10:35:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:06.357 10:35:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:06.357 10:35:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:06.357 10:35:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:06.357 10:35:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:06.357 10:35:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:06.357 10:35:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:06.357 10:35:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:06.357 10:35:45 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:06.357 10:35:45 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:06.357 10:35:45 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:06.357 10:35:45 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:06.357 10:35:45 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:06.357 10:35:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.357 10:35:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.357 10:35:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.357 10:35:45 -- paths/export.sh@5 -- # export PATH 00:03:06.357 10:35:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.357 10:35:45 -- nvmf/common.sh@51 -- # : 0 00:03:06.357 10:35:45 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:06.357 10:35:45 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:06.357 10:35:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:06.357 
10:35:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:06.357 10:35:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:06.357 10:35:45 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:06.357 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:06.357 10:35:45 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:06.357 10:35:45 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:06.357 10:35:45 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:06.357 10:35:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:06.357 10:35:45 -- spdk/autotest.sh@32 -- # uname -s 00:03:06.357 10:35:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:06.357 10:35:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:06.357 10:35:45 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:06.357 10:35:45 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:06.357 10:35:45 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:06.357 10:35:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:06.357 10:35:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:06.357 10:35:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:06.357 10:35:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:06.357 10:35:45 -- spdk/autotest.sh@48 -- # udevadm_pid=1444640 00:03:06.357 10:35:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:06.357 10:35:45 -- pm/common@17 -- # local monitor 00:03:06.357 10:35:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.357 10:35:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.357 10:35:45 -- pm/common@21 -- # date +%s 00:03:06.357 10:35:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.357 10:35:45 -- pm/common@21 -- # date +%s 00:03:06.357 10:35:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.357 10:35:45 -- pm/common@25 -- # sleep 1 00:03:06.357 10:35:45 -- pm/common@21 -- # date +%s 00:03:06.357 10:35:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732786545 00:03:06.357 10:35:45 -- pm/common@21 -- # date +%s 00:03:06.357 10:35:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732786545 00:03:06.357 10:35:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732786545 00:03:06.357 10:35:45 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732786545 00:03:06.616 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732786545_collect-cpu-load.pm.log 00:03:06.616 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732786545_collect-vmstat.pm.log 00:03:06.616 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732786545_collect-cpu-temp.pm.log 00:03:06.616 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732786545_collect-bmc-pm.bmc.pm.log 00:03:07.554 10:35:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:07.554 10:35:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:07.554 10:35:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:07.554 10:35:46 -- common/autotest_common.sh@10 -- # set +x 00:03:07.554 10:35:46 -- spdk/autotest.sh@59 -- # create_test_list 00:03:07.554 10:35:46 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:07.554 10:35:46 -- common/autotest_common.sh@10 -- # set +x 00:03:07.555 10:35:46 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:07.555 10:35:46 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:07.555 10:35:46 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:07.555 10:35:46 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:07.555 10:35:46 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:07.555 10:35:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:07.555 10:35:46 -- common/autotest_common.sh@1457 -- # uname 00:03:07.555 10:35:46 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:07.555 10:35:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:07.555 10:35:46 -- common/autotest_common.sh@1477 -- # uname 00:03:07.555 10:35:46 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:07.555 10:35:46 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:07.555 10:35:46 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:07.555 lcov: LCOV version 1.15 00:03:07.555 10:35:46 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:19.759 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:19.759 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:34.654 10:36:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:34.654 10:36:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.654 10:36:11 -- common/autotest_common.sh@10 -- # set +x 00:03:34.654 10:36:11 -- spdk/autotest.sh@78 -- # rm -f 00:03:34.654 10:36:11 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.563 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:00:04.3 (8086 2021): Already using the ioatdma driver 
00:03:36.563 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:36.563 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:36.822 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:36.822 10:36:15 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:36.822 10:36:15 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:36.822 10:36:15 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:36.822 10:36:15 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:36.822 10:36:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:36.822 10:36:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:36.822 10:36:15 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:36.822 10:36:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.822 10:36:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:36.822 10:36:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:36.822 10:36:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.822 10:36:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:36.822 10:36:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:36.822 10:36:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:36.822 10:36:15 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:36.822 No valid GPT data, bailing 00:03:36.822 10:36:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:36.822 10:36:15 -- scripts/common.sh@394 -- # pt= 00:03:36.822 10:36:15 -- scripts/common.sh@395 -- # return 1 00:03:36.822 10:36:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:36.823 1+0 records in 00:03:36.823 1+0 records out 00:03:36.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464016 s, 226 MB/s 00:03:36.823 10:36:15 -- spdk/autotest.sh@105 -- # sync 00:03:36.823 10:36:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:36.823 10:36:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:36.823 10:36:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:45.089 10:36:22 -- spdk/autotest.sh@111 -- # uname -s 00:03:45.089 10:36:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:45.089 10:36:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:45.089 10:36:22 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:48.375 Hugepages 00:03:48.375 node hugesize free / total 00:03:48.375 node0 1048576kB 0 / 0 00:03:48.375 node0 2048kB 0 / 0 00:03:48.375 node1 1048576kB 0 / 0 00:03:48.375 node1 2048kB 0 / 0 00:03:48.375 00:03:48.375 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:48.375 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 
00:03:48.375 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:48.375 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:48.375 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:48.375 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:48.375 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:48.375 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:48.375 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:48.375 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:48.375 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:48.375 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:48.375 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:48.375 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:48.375 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:48.375 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:48.375 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:48.375 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:48.375 10:36:26 -- spdk/autotest.sh@117 -- # uname -s 00:03:48.375 10:36:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:48.375 10:36:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:48.375 10:36:26 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:52.567 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:54.473 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:54.473 10:36:32 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:55.410 10:36:33 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:55.410 10:36:33 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:55.410 10:36:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:55.410 10:36:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:55.410 10:36:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:55.410 10:36:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:55.410 10:36:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.410 10:36:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:55.410 10:36:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:55.410 10:36:34 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:55.410 10:36:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:03:55.410 10:36:34 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.601 Waiting for block devices as requested 00:03:59.601 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:59.601 0000:00:04.6 (8086 2021): vfio-pci -> 
ioatdma 00:03:59.601 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:59.601 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:59.601 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:59.601 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:59.601 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:59.601 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:59.601 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:59.860 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:59.860 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:59.860 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:00.118 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:00.118 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:00.118 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:00.377 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:00.377 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:00.634 10:36:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:00.634 10:36:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:00.634 10:36:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:00.634 10:36:39 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:04:00.634 10:36:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:00.634 10:36:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:00.634 10:36:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:00.634 10:36:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:00.634 10:36:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:00.634 10:36:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:00.634 10:36:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:00.634 10:36:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:00.634 10:36:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:00.634 10:36:39 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:00.634 10:36:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:00.634 10:36:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:00.634 10:36:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:00.634 10:36:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:00.634 10:36:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:00.634 10:36:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:00.634 10:36:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:00.634 10:36:39 -- common/autotest_common.sh@1543 -- # continue 00:04:00.634 10:36:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:00.634 10:36:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.634 10:36:39 -- common/autotest_common.sh@10 -- # set +x 00:04:00.634 10:36:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:00.634 10:36:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.634 10:36:39 -- common/autotest_common.sh@10 -- # set +x 00:04:00.634 10:36:39 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:04.822 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:00:04.4 
(8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.822 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:06.199 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:06.468 10:36:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:06.468 10:36:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.468 10:36:45 -- common/autotest_common.sh@10 -- # set +x 00:04:06.468 10:36:45 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:06.468 10:36:45 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:06.468 10:36:45 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:06.468 10:36:45 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:06.468 10:36:45 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:06.468 10:36:45 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:06.468 10:36:45 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:06.468 10:36:45 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:06.468 10:36:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.468 10:36:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.468 10:36:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.468 10:36:45 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:06.468 10:36:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.468 10:36:45 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:06.468 10:36:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:06.468 10:36:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:06.468 10:36:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:06.468 10:36:45 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:06.468 10:36:45 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:06.468 10:36:45 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:06.468 10:36:45 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:06.468 10:36:45 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:04:06.468 10:36:45 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:04:06.468 10:36:45 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1462266 00:04:06.468 10:36:45 -- common/autotest_common.sh@1585 -- # waitforlisten 1462266 00:04:06.468 10:36:45 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.468 10:36:45 -- common/autotest_common.sh@835 -- # '[' -z 1462266 ']' 00:04:06.468 10:36:45 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.468 10:36:45 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.468 10:36:45 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.468 10:36:45 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.468 10:36:45 -- common/autotest_common.sh@10 -- # set +x 00:04:06.729 [2024-11-28 10:36:45.418068] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:06.729 [2024-11-28 10:36:45.418168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462266 ] 00:04:06.729 [2024-11-28 10:36:45.569629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.988 [2024-11-28 10:36:45.665811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.556 10:36:46 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.556 10:36:46 -- common/autotest_common.sh@868 -- # return 0 00:04:07.556 10:36:46 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:07.556 10:36:46 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:07.556 10:36:46 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:10.840 nvme0n1 00:04:10.840 10:36:49 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:10.840 [2024-11-28 10:36:49.636859] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:10.840 request: 00:04:10.840 { 00:04:10.840 "nvme_ctrlr_name": "nvme0", 00:04:10.840 "password": "test", 00:04:10.840 "method": "bdev_nvme_opal_revert", 00:04:10.840 "req_id": 1 00:04:10.840 } 00:04:10.840 Got JSON-RPC error response 00:04:10.840 response: 00:04:10.840 { 00:04:10.840 "code": -32602, 00:04:10.840 "message": "Invalid parameters" 00:04:10.840 } 00:04:10.840 10:36:49 -- common/autotest_common.sh@1591 -- # true 00:04:10.840 10:36:49 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:10.840 10:36:49 -- common/autotest_common.sh@1595 -- # killprocess 1462266 00:04:10.841 10:36:49 -- common/autotest_common.sh@954 -- # '[' -z 1462266 ']' 00:04:10.841 10:36:49 -- common/autotest_common.sh@958 -- # kill -0 1462266 00:04:10.841 10:36:49 -- common/autotest_common.sh@959 -- # uname 00:04:10.841 10:36:49 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.841 10:36:49 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1462266 00:04:11.099 10:36:49 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.100 10:36:49 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.100 10:36:49 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1462266' 00:04:11.100 killing process with pid 1462266 00:04:11.100 10:36:49 -- common/autotest_common.sh@973 -- # kill 1462266 00:04:11.100 10:36:49 -- common/autotest_common.sh@978 -- # wait 1462266 00:04:15.289 10:36:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:15.289 10:36:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:15.289 10:36:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.289 10:36:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.289 10:36:54 -- spdk/autotest.sh@149 -- # 
timing_enter lib 00:04:15.289 10:36:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.289 10:36:54 -- common/autotest_common.sh@10 -- # set +x 00:04:15.289 10:36:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:15.289 10:36:54 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:15.289 10:36:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.289 10:36:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.289 10:36:54 -- common/autotest_common.sh@10 -- # set +x 00:04:15.548 ************************************ 00:04:15.548 START TEST env 00:04:15.548 ************************************ 00:04:15.548 10:36:54 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:15.548 * Looking for test storage... 00:04:15.548 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:15.548 10:36:54 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.548 10:36:54 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.548 10:36:54 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.548 10:36:54 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.548 10:36:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.548 10:36:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.548 10:36:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.548 10:36:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.548 10:36:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.548 10:36:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.548 10:36:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.548 10:36:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.549 10:36:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.549 10:36:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.549 10:36:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.549 10:36:54 env -- scripts/common.sh@344 -- # case "$op" in 00:04:15.549 10:36:54 env -- scripts/common.sh@345 -- # : 1 00:04:15.549 10:36:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.549 10:36:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.549 10:36:54 env -- scripts/common.sh@365 -- # decimal 1 00:04:15.549 10:36:54 env -- scripts/common.sh@353 -- # local d=1 00:04:15.549 10:36:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.549 10:36:54 env -- scripts/common.sh@355 -- # echo 1 00:04:15.549 10:36:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.549 10:36:54 env -- scripts/common.sh@366 -- # decimal 2 00:04:15.549 10:36:54 env -- scripts/common.sh@353 -- # local d=2 00:04:15.549 10:36:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.549 10:36:54 env -- scripts/common.sh@355 -- # echo 2 00:04:15.549 10:36:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.549 10:36:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.549 10:36:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.549 10:36:54 env -- scripts/common.sh@368 -- # return 0 00:04:15.549 10:36:54 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.549 10:36:54 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.549 --rc genhtml_branch_coverage=1 00:04:15.549 --rc genhtml_function_coverage=1 00:04:15.549 --rc genhtml_legend=1 00:04:15.549 --rc geninfo_all_blocks=1 00:04:15.549 --rc geninfo_unexecuted_blocks=1 00:04:15.549 00:04:15.549 ' 00:04:15.549 10:36:54 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.549 --rc genhtml_branch_coverage=1 00:04:15.549 --rc genhtml_function_coverage=1 00:04:15.549 --rc genhtml_legend=1 00:04:15.549 --rc geninfo_all_blocks=1 00:04:15.549 --rc geninfo_unexecuted_blocks=1 00:04:15.549 00:04:15.549 ' 00:04:15.549 10:36:54 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.549 --rc genhtml_branch_coverage=1 00:04:15.549 --rc genhtml_function_coverage=1 00:04:15.549 --rc genhtml_legend=1 00:04:15.549 --rc geninfo_all_blocks=1 00:04:15.549 --rc geninfo_unexecuted_blocks=1 00:04:15.549 00:04:15.549 ' 00:04:15.549 10:36:54 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.549 --rc genhtml_branch_coverage=1 00:04:15.549 --rc genhtml_function_coverage=1 00:04:15.549 --rc genhtml_legend=1 00:04:15.549 --rc geninfo_all_blocks=1 00:04:15.549 --rc geninfo_unexecuted_blocks=1 00:04:15.549 00:04:15.549 ' 00:04:15.549 10:36:54 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.549 10:36:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.549 10:36:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.549 10:36:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.549 ************************************ 00:04:15.549 START TEST env_memory 00:04:15.549 ************************************ 00:04:15.549 10:36:54 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.808 00:04:15.808 00:04:15.808 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.808 http://cunit.sourceforge.net/ 00:04:15.808 00:04:15.808 00:04:15.808 Suite: memory 00:04:15.808 Test: alloc and free memory map ...[2024-11-28 10:36:54.462510] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.808 passed 00:04:15.808 Test: mem map translation ...[2024-11-28 10:36:54.496747] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.808 [2024-11-28 10:36:54.496776] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.808 [2024-11-28 10:36:54.496830] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.808 [2024-11-28 10:36:54.496849] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.808 passed 00:04:15.808 Test: mem map registration ...[2024-11-28 10:36:54.551365] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:15.808 [2024-11-28 10:36:54.551393] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:15.808 passed 00:04:15.808 Test: mem map adjacent registrations ...passed 00:04:15.808 00:04:15.808 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.808 suites 1 1 n/a 0 0 00:04:15.808 tests 4 4 4 0 0 00:04:15.808 asserts 152 152 152 0 n/a 00:04:15.808 00:04:15.808 Elapsed time = 0.195 seconds 00:04:15.808 00:04:15.808 real 0m0.227s 00:04:15.808 user 0m0.206s 00:04:15.808 sys 0m0.020s 00:04:15.808 10:36:54 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.808 10:36:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.808 ************************************ 00:04:15.808 END TEST env_memory 00:04:15.808 ************************************ 00:04:15.808 10:36:54 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.808 10:36:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.808 10:36:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.808 10:36:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.068 ************************************ 00:04:16.068 START TEST env_vtophys 00:04:16.068 ************************************ 00:04:16.068 10:36:54 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:16.068 EAL: lib.eal log level changed from notice to debug 00:04:16.068 EAL: Detected lcore 0 as core 0 on socket 0 00:04:16.068 EAL: Detected lcore 1 as core 1 on socket 0 00:04:16.068 EAL: Detected lcore 2 as core 2 on socket 0 00:04:16.068 EAL: Detected lcore 3 as core 3 on socket 0 00:04:16.068 EAL: Detected lcore 4 as core 4 on socket 0 00:04:16.068 EAL: Detected lcore 5 as core 5 on socket 0 00:04:16.068 EAL: Detected lcore 6 as core 6 on socket 0 00:04:16.068 EAL: Detected lcore 7 as core 8 on socket 0 00:04:16.068 EAL: Detected lcore 8 as core 9 on socket 0 00:04:16.068 EAL: Detected lcore 9 as core 10 on socket 0 00:04:16.068 EAL: Detected lcore 10 as core 11 on socket 0 00:04:16.068 
EAL: Detected lcore 11 as core 12 on socket 0 00:04:16.068 EAL: Detected lcore 12 as core 13 on socket 0 00:04:16.068 EAL: Detected lcore 13 as core 14 on socket 0 00:04:16.068 EAL: Detected lcore 14 as core 16 on socket 0 00:04:16.068 EAL: Detected lcore 15 as core 17 on socket 0 00:04:16.068 EAL: Detected lcore 16 as core 18 on socket 0 00:04:16.068 EAL: Detected lcore 17 as core 19 on socket 0 00:04:16.068 EAL: Detected lcore 18 as core 20 on socket 0 00:04:16.068 EAL: Detected lcore 19 as core 21 on socket 0 00:04:16.068 EAL: Detected lcore 20 as core 22 on socket 0 00:04:16.068 EAL: Detected lcore 21 as core 24 on socket 0 00:04:16.068 EAL: Detected lcore 22 as core 25 on socket 0 00:04:16.068 EAL: Detected lcore 23 as core 26 on socket 0 00:04:16.068 EAL: Detected lcore 24 as core 27 on socket 0 00:04:16.068 EAL: Detected lcore 25 as core 28 on socket 0 00:04:16.068 EAL: Detected lcore 26 as core 29 on socket 0 00:04:16.068 EAL: Detected lcore 27 as core 30 on socket 0 00:04:16.068 EAL: Detected lcore 28 as core 0 on socket 1 00:04:16.068 EAL: Detected lcore 29 as core 1 on socket 1 00:04:16.068 EAL: Detected lcore 30 as core 2 on socket 1 00:04:16.068 EAL: Detected lcore 31 as core 3 on socket 1 00:04:16.068 EAL: Detected lcore 32 as core 4 on socket 1 00:04:16.068 EAL: Detected lcore 33 as core 5 on socket 1 00:04:16.068 EAL: Detected lcore 34 as core 6 on socket 1 00:04:16.068 EAL: Detected lcore 35 as core 8 on socket 1 00:04:16.068 EAL: Detected lcore 36 as core 9 on socket 1 00:04:16.068 EAL: Detected lcore 37 as core 10 on socket 1 00:04:16.068 EAL: Detected lcore 38 as core 11 on socket 1 00:04:16.068 EAL: Detected lcore 39 as core 12 on socket 1 00:04:16.068 EAL: Detected lcore 40 as core 13 on socket 1 00:04:16.068 EAL: Detected lcore 41 as core 14 on socket 1 00:04:16.068 EAL: Detected lcore 42 as core 16 on socket 1 00:04:16.068 EAL: Detected lcore 43 as core 17 on socket 1 00:04:16.068 EAL: Detected lcore 44 as core 18 on socket 1 00:04:16.068 EAL: Detected lcore 45 as core 19 on socket 1 00:04:16.068 EAL: Detected lcore 46 as core 20 on socket 1 00:04:16.068 EAL: Detected lcore 47 as core 21 on socket 1 00:04:16.068 EAL: Detected lcore 48 as core 22 on socket 1 00:04:16.068 EAL: Detected lcore 49 as core 24 on socket 1 00:04:16.068 EAL: Detected lcore 50 as core 25 on socket 1 00:04:16.068 EAL: Detected lcore 51 as core 26 on socket 1 00:04:16.068 EAL: Detected lcore 52 as core 27 on socket 1 00:04:16.068 EAL: Detected lcore 53 as core 28 on socket 1 00:04:16.068 EAL: Detected lcore 54 as core 29 on socket 1 00:04:16.068 EAL: Detected lcore 55 as core 30 on socket 1 00:04:16.068 EAL: Detected lcore 56 as core 0 on socket 0 00:04:16.068 EAL: Detected lcore 57 as core 1 on socket 0 00:04:16.068 EAL: Detected lcore 58 as core 2 on socket 0 00:04:16.068 EAL: Detected lcore 59 as core 3 on socket 0 00:04:16.068 EAL: Detected lcore 60 as core 4 on socket 0 00:04:16.068 EAL: Detected lcore 61 as core 5 on socket 0 00:04:16.068 EAL: Detected lcore 62 as core 6 on socket 0 00:04:16.068 EAL: Detected lcore 63 as core 8 on socket 0 00:04:16.068 EAL: Detected lcore 64 as core 9 on socket 0 00:04:16.068 EAL: Detected lcore 65 as core 10 on socket 0 00:04:16.068 EAL: Detected lcore 66 as core 11 on socket 0 00:04:16.068 EAL: Detected lcore 67 as core 12 on socket 0 00:04:16.068 EAL: Detected lcore 68 as core 13 on socket 0 00:04:16.068 EAL: Detected lcore 69 as core 14 on socket 0 00:04:16.068 EAL: Detected lcore 70 as core 16 on socket 0 00:04:16.068 EAL: Detected lcore 71 as core 
17 on socket 0 00:04:16.068 EAL: Detected lcore 72 as core 18 on socket 0 00:04:16.068 EAL: Detected lcore 73 as core 19 on socket 0 00:04:16.068 EAL: Detected lcore 74 as core 20 on socket 0 00:04:16.068 EAL: Detected lcore 75 as core 21 on socket 0 00:04:16.068 EAL: Detected lcore 76 as core 22 on socket 0 00:04:16.068 EAL: Detected lcore 77 as core 24 on socket 0 00:04:16.068 EAL: Detected lcore 78 as core 25 on socket 0 00:04:16.068 EAL: Detected lcore 79 as core 26 on socket 0 00:04:16.068 EAL: Detected lcore 80 as core 27 on socket 0 00:04:16.068 EAL: Detected lcore 81 as core 28 on socket 0 00:04:16.068 EAL: Detected lcore 82 as core 29 on socket 0 00:04:16.068 EAL: Detected lcore 83 as core 30 on socket 0 00:04:16.068 EAL: Detected lcore 84 as core 0 on socket 1 00:04:16.068 EAL: Detected lcore 85 as core 1 on socket 1 00:04:16.068 EAL: Detected lcore 86 as core 2 on socket 1 00:04:16.068 EAL: Detected lcore 87 as core 3 on socket 1 00:04:16.068 EAL: Detected lcore 88 as core 4 on socket 1 00:04:16.068 EAL: Detected lcore 89 as core 5 on socket 1 00:04:16.068 EAL: Detected lcore 90 as core 6 on socket 1 00:04:16.068 EAL: Detected lcore 91 as core 8 on socket 1 00:04:16.068 EAL: Detected lcore 92 as core 9 on socket 1 00:04:16.068 EAL: Detected lcore 93 as core 10 on socket 1 00:04:16.068 EAL: Detected lcore 94 as core 11 on socket 1 00:04:16.068 EAL: Detected lcore 95 as core 12 on socket 1 00:04:16.068 EAL: Detected lcore 96 as core 13 on socket 1 00:04:16.068 EAL: Detected lcore 97 as core 14 on socket 1 00:04:16.068 EAL: Detected lcore 98 as core 16 on socket 1 00:04:16.068 EAL: Detected lcore 99 as core 17 on socket 1 00:04:16.068 EAL: Detected lcore 100 as core 18 on socket 1 00:04:16.068 EAL: Detected lcore 101 as core 19 on socket 1 00:04:16.068 EAL: Detected lcore 102 as core 20 on socket 1 00:04:16.068 EAL: Detected lcore 103 as core 21 on socket 1 00:04:16.068 EAL: Detected lcore 104 as core 22 on socket 1 00:04:16.068 EAL: Detected lcore 105 as core 24 on socket 1 00:04:16.068 EAL: Detected lcore 106 as core 25 on socket 1 00:04:16.068 EAL: Detected lcore 107 as core 26 on socket 1 00:04:16.068 EAL: Detected lcore 108 as core 27 on socket 1 00:04:16.068 EAL: Detected lcore 109 as core 28 on socket 1 00:04:16.068 EAL: Detected lcore 110 as core 29 on socket 1 00:04:16.068 EAL: Detected lcore 111 as core 30 on socket 1 00:04:16.068 EAL: Maximum logical cores by configuration: 128 00:04:16.068 EAL: Detected CPU lcores: 112 00:04:16.068 EAL: Detected NUMA nodes: 2 00:04:16.068 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:16.068 EAL: Detected shared linkage of DPDK 00:04:16.068 EAL: No shared files mode enabled, IPC will be disabled 00:04:16.068 EAL: Bus pci wants IOVA as 'DC' 00:04:16.068 EAL: Buses did not request a specific IOVA mode. 00:04:16.068 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:16.068 EAL: Selected IOVA mode 'VA' 00:04:16.068 EAL: Probing VFIO support... 00:04:16.068 EAL: IOMMU type 1 (Type 1) is supported 00:04:16.068 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:16.068 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:16.068 EAL: VFIO support initialized 00:04:16.068 EAL: Ask a virtual area of 0x2e000 bytes 00:04:16.068 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:16.068 EAL: Setting up physically contiguous memory... 
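The EAL trace above comes from the env_vtophys unit test bringing up the DPDK environment with IOVA-as-VA and pre-reserving memseg virtual-address ranges. Below is a minimal C sketch of the same path — initializing the SPDK env and translating a hugepage-backed buffer with spdk_vtophys() — built on the public spdk/env.h API; it is an illustration under those assumptions, not the actual source of test/env/vtophys.

```c
/* Hedged sketch: exercises the spdk_env_init()/spdk_vtophys() path whose EAL
 * output appears in the log above; this is not the SPDK test source itself. */
#include <stdio.h>
#include <inttypes.h>
#include "spdk/env.h"

int main(void)
{
	struct spdk_env_opts opts;
	void *buf;
	uint64_t iova;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch";   /* hypothetical app name */
	opts.core_mask = "0x1";

	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Allocate a 2 MiB hugepage-backed, DMA-safe buffer. */
	buf = spdk_dma_malloc(2 * 1024 * 1024, 0x200000, NULL);
	if (buf == NULL) {
		fprintf(stderr, "spdk_dma_malloc failed\n");
		return 1;
	}

	/* Translate the virtual address; with IOVA-as-VA (as selected in the
	 * trace above) the returned IOVA equals the virtual address. */
	iova = spdk_vtophys(buf, NULL);
	if (iova == SPDK_VTOPHYS_ERROR) {
		fprintf(stderr, "translation failed\n");
	} else {
		printf("vaddr %p -> iova 0x%" PRIx64 "\n", buf, iova);
	}

	spdk_dma_free(buf);
	spdk_env_fini();
	return 0;
}
```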
00:04:16.068 EAL: Setting maximum number of open files to 524288 00:04:16.068 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:16.068 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:16.068 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:16.068 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.068 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:16.068 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.068 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.068 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:16.068 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:16.068 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.068 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:16.068 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.068 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.068 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:16.068 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:16.068 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.068 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:16.068 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.068 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.068 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:16.069 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:16.069 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.069 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:16.069 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.069 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.069 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:16.069 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:16.069 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:16.069 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.069 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:16.069 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.069 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.069 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:16.069 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:16.069 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.069 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:16.069 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.069 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.069 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:16.069 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:16.069 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.069 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:16.069 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.069 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.069 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:16.069 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:16.069 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.069 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:16.069 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.069 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.069 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:16.069 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:16.069 EAL: Hugepages will be freed exactly as allocated. 00:04:16.069 EAL: No shared files mode enabled, IPC is disabled 00:04:16.069 EAL: No shared files mode enabled, IPC is disabled 00:04:16.069 EAL: TSC frequency is ~2500000 KHz 00:04:16.069 EAL: Main lcore 0 is ready (tid=7f8bb2c70a40;cpuset=[0]) 00:04:16.069 EAL: Trying to obtain current memory policy. 00:04:16.069 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.069 EAL: Restoring previous memory policy: 0 00:04:16.069 EAL: request: mp_malloc_sync 00:04:16.069 EAL: No shared files mode enabled, IPC is disabled 00:04:16.069 EAL: Heap on socket 0 was expanded by 2MB 00:04:16.069 EAL: No shared files mode enabled, IPC is disabled 00:04:16.069 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:16.069 EAL: Mem event callback 'spdk:(nil)' registered 00:04:16.069 00:04:16.069 00:04:16.069 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.069 http://cunit.sourceforge.net/ 00:04:16.069 00:04:16.069 00:04:16.069 Suite: components_suite 00:04:16.637 Test: vtophys_malloc_test ...passed 00:04:16.637 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:16.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.637 EAL: Restoring previous memory policy: 4 00:04:16.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.637 EAL: request: mp_malloc_sync 00:04:16.637 EAL: No shared files mode enabled, IPC is disabled 00:04:16.637 EAL: Heap on socket 0 was expanded by 4MB 00:04:16.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.637 EAL: request: mp_malloc_sync 00:04:16.637 EAL: No shared files mode enabled, IPC is disabled 00:04:16.637 EAL: Heap on socket 0 was shrunk by 4MB 00:04:16.637 EAL: Trying to obtain current memory policy. 00:04:16.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.637 EAL: Restoring previous memory policy: 4 00:04:16.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.637 EAL: request: mp_malloc_sync 00:04:16.637 EAL: No shared files mode enabled, IPC is disabled 00:04:16.637 EAL: Heap on socket 0 was expanded by 6MB 00:04:16.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.637 EAL: request: mp_malloc_sync 00:04:16.637 EAL: No shared files mode enabled, IPC is disabled 00:04:16.637 EAL: Heap on socket 0 was shrunk by 6MB 00:04:16.637 EAL: Trying to obtain current memory policy. 00:04:16.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.637 EAL: Restoring previous memory policy: 4 00:04:16.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.637 EAL: request: mp_malloc_sync 00:04:16.637 EAL: No shared files mode enabled, IPC is disabled 00:04:16.637 EAL: Heap on socket 0 was expanded by 10MB 00:04:16.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.637 EAL: request: mp_malloc_sync 00:04:16.637 EAL: No shared files mode enabled, IPC is disabled 00:04:16.637 EAL: Heap on socket 0 was shrunk by 10MB 00:04:16.637 EAL: Trying to obtain current memory policy. 
00:04:16.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.637 EAL: Restoring previous memory policy: 4 00:04:16.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.637 EAL: request: mp_malloc_sync 00:04:16.637 EAL: No shared files mode enabled, IPC is disabled 00:04:16.637 EAL: Heap on socket 0 was expanded by 18MB 00:04:16.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.637 EAL: request: mp_malloc_sync 00:04:16.637 EAL: No shared files mode enabled, IPC is disabled 00:04:16.637 EAL: Heap on socket 0 was shrunk by 18MB 00:04:16.637 EAL: Trying to obtain current memory policy. 00:04:16.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.637 EAL: Restoring previous memory policy: 4 00:04:16.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.637 EAL: request: mp_malloc_sync 00:04:16.637 EAL: No shared files mode enabled, IPC is disabled 00:04:16.637 EAL: Heap on socket 0 was expanded by 34MB 00:04:16.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.637 EAL: request: mp_malloc_sync 00:04:16.637 EAL: No shared files mode enabled, IPC is disabled 00:04:16.637 EAL: Heap on socket 0 was shrunk by 34MB 00:04:16.637 EAL: Trying to obtain current memory policy. 00:04:16.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.637 EAL: Restoring previous memory policy: 4 00:04:16.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.637 EAL: request: mp_malloc_sync 00:04:16.637 EAL: No shared files mode enabled, IPC is disabled 00:04:16.637 EAL: Heap on socket 0 was expanded by 66MB 00:04:16.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.897 EAL: request: mp_malloc_sync 00:04:16.897 EAL: No shared files mode enabled, IPC is disabled 00:04:16.897 EAL: Heap on socket 0 was shrunk by 66MB 00:04:16.897 EAL: Trying to obtain current memory policy. 00:04:16.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.897 EAL: Restoring previous memory policy: 4 00:04:16.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.897 EAL: request: mp_malloc_sync 00:04:16.897 EAL: No shared files mode enabled, IPC is disabled 00:04:16.897 EAL: Heap on socket 0 was expanded by 130MB 00:04:17.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.156 EAL: request: mp_malloc_sync 00:04:17.156 EAL: No shared files mode enabled, IPC is disabled 00:04:17.156 EAL: Heap on socket 0 was shrunk by 130MB 00:04:17.415 EAL: Trying to obtain current memory policy. 00:04:17.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.415 EAL: Restoring previous memory policy: 4 00:04:17.415 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.415 EAL: request: mp_malloc_sync 00:04:17.415 EAL: No shared files mode enabled, IPC is disabled 00:04:17.415 EAL: Heap on socket 0 was expanded by 258MB 00:04:17.674 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.934 EAL: request: mp_malloc_sync 00:04:17.934 EAL: No shared files mode enabled, IPC is disabled 00:04:17.934 EAL: Heap on socket 0 was shrunk by 258MB 00:04:18.193 EAL: Trying to obtain current memory policy. 
00:04:18.193 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.453 EAL: Restoring previous memory policy: 4 00:04:18.453 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.453 EAL: request: mp_malloc_sync 00:04:18.453 EAL: No shared files mode enabled, IPC is disabled 00:04:18.453 EAL: Heap on socket 0 was expanded by 514MB 00:04:19.021 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.280 EAL: request: mp_malloc_sync 00:04:19.280 EAL: No shared files mode enabled, IPC is disabled 00:04:19.280 EAL: Heap on socket 0 was shrunk by 514MB 00:04:19.849 EAL: Trying to obtain current memory policy. 00:04:19.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.109 EAL: Restoring previous memory policy: 4 00:04:20.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.109 EAL: request: mp_malloc_sync 00:04:20.109 EAL: No shared files mode enabled, IPC is disabled 00:04:20.109 EAL: Heap on socket 0 was expanded by 1026MB 00:04:22.011 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.011 EAL: request: mp_malloc_sync 00:04:22.011 EAL: No shared files mode enabled, IPC is disabled 00:04:22.011 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:23.387 passed 00:04:23.387 00:04:23.387 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.387 suites 1 1 n/a 0 0 00:04:23.387 tests 2 2 2 0 0 00:04:23.387 asserts 497 497 497 0 n/a 00:04:23.387 00:04:23.387 Elapsed time = 7.171 seconds 00:04:23.387 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.387 EAL: request: mp_malloc_sync 00:04:23.387 EAL: No shared files mode enabled, IPC is disabled 00:04:23.387 EAL: Heap on socket 0 was shrunk by 2MB 00:04:23.387 EAL: No shared files mode enabled, IPC is disabled 00:04:23.387 EAL: No shared files mode enabled, IPC is disabled 00:04:23.387 EAL: No shared files mode enabled, IPC is disabled 00:04:23.387 00:04:23.387 real 0m7.453s 00:04:23.387 user 0m6.555s 00:04:23.387 sys 0m0.846s 00:04:23.387 10:37:02 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.387 10:37:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:23.387 ************************************ 00:04:23.387 END TEST env_vtophys 00:04:23.387 ************************************ 00:04:23.387 10:37:02 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:23.387 10:37:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.387 10:37:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.387 10:37:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.387 ************************************ 00:04:23.387 START TEST env_pci 00:04:23.387 ************************************ 00:04:23.387 10:37:02 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:23.646 00:04:23.646 00:04:23.646 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.646 http://cunit.sourceforge.net/ 00:04:23.646 00:04:23.646 00:04:23.646 Suite: pci 00:04:23.646 Test: pci_hook ...[2024-11-28 10:37:02.305207] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1465350 has claimed it 00:04:23.646 EAL: Cannot find device (10000:00:01.0) 00:04:23.646 EAL: Failed to attach device on primary process 00:04:23.646 passed 00:04:23.646 00:04:23.646 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.646 suites 1 
1 n/a 0 0 00:04:23.646 tests 1 1 1 0 0 00:04:23.646 asserts 25 25 25 0 n/a 00:04:23.646 00:04:23.646 Elapsed time = 0.062 seconds 00:04:23.646 00:04:23.646 real 0m0.143s 00:04:23.646 user 0m0.059s 00:04:23.646 sys 0m0.083s 00:04:23.646 10:37:02 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.646 10:37:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:23.646 ************************************ 00:04:23.646 END TEST env_pci 00:04:23.646 ************************************ 00:04:23.646 10:37:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:23.646 10:37:02 env -- env/env.sh@15 -- # uname 00:04:23.646 10:37:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:23.646 10:37:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:23.646 10:37:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:23.646 10:37:02 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:23.646 10:37:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.646 10:37:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.646 ************************************ 00:04:23.646 START TEST env_dpdk_post_init 00:04:23.646 ************************************ 00:04:23.646 10:37:02 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:23.905 EAL: Detected CPU lcores: 112 00:04:23.905 EAL: Detected NUMA nodes: 2 00:04:23.905 EAL: Detected shared linkage of DPDK 00:04:23.905 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:23.905 EAL: Selected IOVA mode 'VA' 00:04:23.905 EAL: VFIO support initialized 00:04:23.905 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:23.905 EAL: Using IOMMU type 1 (Type 1) 00:04:23.905 EAL: Ignore mapping IO port bar(1) 00:04:23.905 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:24.164 EAL: Ignore mapping IO port 
bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:24.164 EAL: Ignore mapping IO port bar(1) 00:04:24.164 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:25.101 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:29.321 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:29.321 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:29.321 Starting DPDK initialization... 00:04:29.321 Starting SPDK post initialization... 00:04:29.321 SPDK NVMe probe 00:04:29.321 Attaching to 0000:d8:00.0 00:04:29.321 Attached to 0000:d8:00.0 00:04:29.321 Cleaning up... 00:04:29.321 00:04:29.321 real 0m5.519s 00:04:29.321 user 0m3.834s 00:04:29.321 sys 0m0.736s 00:04:29.321 10:37:08 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.321 10:37:08 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.321 ************************************ 00:04:29.321 END TEST env_dpdk_post_init 00:04:29.321 ************************************ 00:04:29.321 10:37:08 env -- env/env.sh@26 -- # uname 00:04:29.321 10:37:08 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:29.321 10:37:08 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.321 10:37:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.321 10:37:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.321 10:37:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.321 ************************************ 00:04:29.321 START TEST env_mem_callbacks 00:04:29.321 ************************************ 00:04:29.321 10:37:08 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.321 EAL: Detected CPU lcores: 112 00:04:29.321 EAL: Detected NUMA nodes: 2 00:04:29.321 EAL: Detected shared linkage of DPDK 00:04:29.321 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.578 EAL: Selected IOVA mode 'VA' 00:04:29.578 EAL: VFIO support initialized 00:04:29.579 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.579 00:04:29.579 00:04:29.579 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.579 http://cunit.sourceforge.net/ 00:04:29.579 00:04:29.579 00:04:29.579 Suite: memory 00:04:29.579 Test: test ... 
00:04:29.579 register 0x200000200000 2097152 00:04:29.579 malloc 3145728 00:04:29.579 register 0x200000400000 4194304 00:04:29.579 buf 0x2000004fffc0 len 3145728 PASSED 00:04:29.579 malloc 64 00:04:29.579 buf 0x2000004ffec0 len 64 PASSED 00:04:29.579 malloc 4194304 00:04:29.579 register 0x200000800000 6291456 00:04:29.579 buf 0x2000009fffc0 len 4194304 PASSED 00:04:29.579 free 0x2000004fffc0 3145728 00:04:29.579 free 0x2000004ffec0 64 00:04:29.579 unregister 0x200000400000 4194304 PASSED 00:04:29.579 free 0x2000009fffc0 4194304 00:04:29.579 unregister 0x200000800000 6291456 PASSED 00:04:29.579 malloc 8388608 00:04:29.579 register 0x200000400000 10485760 00:04:29.579 buf 0x2000005fffc0 len 8388608 PASSED 00:04:29.579 free 0x2000005fffc0 8388608 00:04:29.579 unregister 0x200000400000 10485760 PASSED 00:04:29.579 passed 00:04:29.579 00:04:29.579 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.579 suites 1 1 n/a 0 0 00:04:29.579 tests 1 1 1 0 0 00:04:29.579 asserts 15 15 15 0 n/a 00:04:29.579 00:04:29.579 Elapsed time = 0.061 seconds 00:04:29.579 00:04:29.579 real 0m0.200s 00:04:29.579 user 0m0.096s 00:04:29.579 sys 0m0.103s 00:04:29.579 10:37:08 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.579 10:37:08 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:29.579 ************************************ 00:04:29.579 END TEST env_mem_callbacks 00:04:29.579 ************************************ 00:04:29.579 00:04:29.579 real 0m14.177s 00:04:29.579 user 0m11.014s 00:04:29.579 sys 0m2.210s 00:04:29.579 10:37:08 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.579 10:37:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.579 ************************************ 00:04:29.579 END TEST env 00:04:29.579 ************************************ 00:04:29.579 10:37:08 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:29.579 10:37:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.579 10:37:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.579 10:37:08 -- common/autotest_common.sh@10 -- # set +x 00:04:29.579 ************************************ 00:04:29.579 START TEST rpc 00:04:29.579 ************************************ 00:04:29.579 10:37:08 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:29.837 * Looking for test storage... 
00:04:29.837 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.837 10:37:08 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.837 10:37:08 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.837 10:37:08 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.837 10:37:08 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.837 10:37:08 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.837 10:37:08 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.837 10:37:08 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.837 10:37:08 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.837 10:37:08 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.837 10:37:08 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.837 10:37:08 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.837 10:37:08 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.837 10:37:08 rpc -- scripts/common.sh@345 -- # : 1 00:04:29.837 10:37:08 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.837 10:37:08 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.837 10:37:08 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.837 10:37:08 rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.837 10:37:08 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.837 10:37:08 rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.837 10:37:08 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.837 10:37:08 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.837 10:37:08 rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.837 10:37:08 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.837 10:37:08 rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.837 10:37:08 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.837 10:37:08 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.837 10:37:08 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.837 10:37:08 rpc -- scripts/common.sh@368 -- # return 0 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.837 --rc genhtml_branch_coverage=1 00:04:29.837 --rc genhtml_function_coverage=1 00:04:29.837 --rc genhtml_legend=1 00:04:29.837 --rc geninfo_all_blocks=1 00:04:29.837 --rc geninfo_unexecuted_blocks=1 00:04:29.837 00:04:29.837 ' 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.837 --rc genhtml_branch_coverage=1 00:04:29.837 --rc genhtml_function_coverage=1 00:04:29.837 --rc genhtml_legend=1 00:04:29.837 --rc geninfo_all_blocks=1 00:04:29.837 --rc geninfo_unexecuted_blocks=1 00:04:29.837 00:04:29.837 ' 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.837 --rc genhtml_branch_coverage=1 00:04:29.837 --rc genhtml_function_coverage=1 00:04:29.837 
--rc genhtml_legend=1 00:04:29.837 --rc geninfo_all_blocks=1 00:04:29.837 --rc geninfo_unexecuted_blocks=1 00:04:29.837 00:04:29.837 ' 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.837 --rc genhtml_branch_coverage=1 00:04:29.837 --rc genhtml_function_coverage=1 00:04:29.837 --rc genhtml_legend=1 00:04:29.837 --rc geninfo_all_blocks=1 00:04:29.837 --rc geninfo_unexecuted_blocks=1 00:04:29.837 00:04:29.837 ' 00:04:29.837 10:37:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1466595 00:04:29.837 10:37:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.837 10:37:08 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:29.837 10:37:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1466595 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@835 -- # '[' -z 1466595 ']' 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.837 10:37:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.837 [2024-11-28 10:37:08.712764] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:29.837 [2024-11-28 10:37:08.712874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466595 ] 00:04:30.095 [2024-11-28 10:37:08.862010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.095 [2024-11-28 10:37:08.954089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:30.095 [2024-11-28 10:37:08.954140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1466595' to capture a snapshot of events at runtime. 00:04:30.095 [2024-11-28 10:37:08.954155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:30.095 [2024-11-28 10:37:08.954166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:30.095 [2024-11-28 10:37:08.954183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1466595 for offline analysis/debug. 
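The rpc.sh run above follows the usual spdk_tgt launch pattern: start the target with the bdev tracepoint group enabled, wait until it answers on /var/tmp/spdk.sock, and optionally snapshot the trace shared-memory file that the startup notice advertises. A minimal sketch of that flow, assuming the same workspace layout as this run and the spdk_trace app built alongside the target (plain rpc.py calls stand in here for the rpc_cmd helper the test scripts use):

#!/usr/bin/env bash
# Sketch only: launch spdk_tgt the way rpc.sh does and grab a trace snapshot.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk

"$SPDK_DIR/build/bin/spdk_tgt" -e bdev &          # enable the 'bdev' tracepoint group
tgt_pid=$!

# Wait for the target to answer on the default RPC socket (/var/tmp/spdk.sock).
until "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
done

# Snapshot the tracepoints at runtime, as the startup notice above suggests;
# the backing file lives at /dev/shm/spdk_tgt_trace.pid$tgt_pid.
"$SPDK_DIR/build/bin/spdk_trace" -s spdk_tgt -p "$tgt_pid" > trace_snapshot.txt

kill "$tgt_pid"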
00:04:30.095 [2024-11-28 10:37:08.955501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.030 10:37:09 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.030 10:37:09 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:31.030 10:37:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:31.030 10:37:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:31.030 10:37:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:31.030 10:37:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:31.030 10:37:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.030 10:37:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.030 10:37:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.030 ************************************ 00:04:31.030 START TEST rpc_integrity 00:04:31.030 ************************************ 00:04:31.030 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:31.030 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.030 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.030 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.030 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.031 { 00:04:31.031 "name": "Malloc0", 00:04:31.031 "aliases": [ 00:04:31.031 "2ee0b7d0-6f1b-4eb0-b38e-d72cee7acc2a" 00:04:31.031 ], 00:04:31.031 "product_name": "Malloc disk", 00:04:31.031 "block_size": 512, 00:04:31.031 "num_blocks": 16384, 00:04:31.031 "uuid": "2ee0b7d0-6f1b-4eb0-b38e-d72cee7acc2a", 00:04:31.031 "assigned_rate_limits": { 00:04:31.031 "rw_ios_per_sec": 0, 00:04:31.031 "rw_mbytes_per_sec": 0, 00:04:31.031 "r_mbytes_per_sec": 0, 00:04:31.031 "w_mbytes_per_sec": 0 00:04:31.031 }, 00:04:31.031 "claimed": false, 
00:04:31.031 "zoned": false, 00:04:31.031 "supported_io_types": { 00:04:31.031 "read": true, 00:04:31.031 "write": true, 00:04:31.031 "unmap": true, 00:04:31.031 "flush": true, 00:04:31.031 "reset": true, 00:04:31.031 "nvme_admin": false, 00:04:31.031 "nvme_io": false, 00:04:31.031 "nvme_io_md": false, 00:04:31.031 "write_zeroes": true, 00:04:31.031 "zcopy": true, 00:04:31.031 "get_zone_info": false, 00:04:31.031 "zone_management": false, 00:04:31.031 "zone_append": false, 00:04:31.031 "compare": false, 00:04:31.031 "compare_and_write": false, 00:04:31.031 "abort": true, 00:04:31.031 "seek_hole": false, 00:04:31.031 "seek_data": false, 00:04:31.031 "copy": true, 00:04:31.031 "nvme_iov_md": false 00:04:31.031 }, 00:04:31.031 "memory_domains": [ 00:04:31.031 { 00:04:31.031 "dma_device_id": "system", 00:04:31.031 "dma_device_type": 1 00:04:31.031 }, 00:04:31.031 { 00:04:31.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.031 "dma_device_type": 2 00:04:31.031 } 00:04:31.031 ], 00:04:31.031 "driver_specific": {} 00:04:31.031 } 00:04:31.031 ]' 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.031 [2024-11-28 10:37:09.871261] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:31.031 [2024-11-28 10:37:09.871312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.031 [2024-11-28 10:37:09.871339] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021680 00:04:31.031 [2024-11-28 10:37:09.871355] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.031 [2024-11-28 10:37:09.873490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.031 [2024-11-28 10:37:09.873521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.031 Passthru0 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.031 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.031 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.031 { 00:04:31.031 "name": "Malloc0", 00:04:31.031 "aliases": [ 00:04:31.031 "2ee0b7d0-6f1b-4eb0-b38e-d72cee7acc2a" 00:04:31.031 ], 00:04:31.031 "product_name": "Malloc disk", 00:04:31.031 "block_size": 512, 00:04:31.031 "num_blocks": 16384, 00:04:31.031 "uuid": "2ee0b7d0-6f1b-4eb0-b38e-d72cee7acc2a", 00:04:31.031 "assigned_rate_limits": { 00:04:31.031 "rw_ios_per_sec": 0, 00:04:31.031 "rw_mbytes_per_sec": 0, 00:04:31.031 "r_mbytes_per_sec": 0, 00:04:31.031 "w_mbytes_per_sec": 0 00:04:31.031 }, 00:04:31.031 "claimed": true, 00:04:31.031 "claim_type": "exclusive_write", 00:04:31.031 "zoned": false, 00:04:31.031 "supported_io_types": { 00:04:31.031 "read": true, 00:04:31.031 "write": true, 00:04:31.031 "unmap": true, 00:04:31.031 "flush": true, 00:04:31.031 "reset": 
true, 00:04:31.031 "nvme_admin": false, 00:04:31.031 "nvme_io": false, 00:04:31.031 "nvme_io_md": false, 00:04:31.031 "write_zeroes": true, 00:04:31.031 "zcopy": true, 00:04:31.031 "get_zone_info": false, 00:04:31.031 "zone_management": false, 00:04:31.031 "zone_append": false, 00:04:31.031 "compare": false, 00:04:31.031 "compare_and_write": false, 00:04:31.031 "abort": true, 00:04:31.031 "seek_hole": false, 00:04:31.031 "seek_data": false, 00:04:31.031 "copy": true, 00:04:31.031 "nvme_iov_md": false 00:04:31.031 }, 00:04:31.031 "memory_domains": [ 00:04:31.031 { 00:04:31.031 "dma_device_id": "system", 00:04:31.031 "dma_device_type": 1 00:04:31.031 }, 00:04:31.031 { 00:04:31.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.031 "dma_device_type": 2 00:04:31.031 } 00:04:31.031 ], 00:04:31.031 "driver_specific": {} 00:04:31.031 }, 00:04:31.031 { 00:04:31.031 "name": "Passthru0", 00:04:31.031 "aliases": [ 00:04:31.031 "df1ae048-a354-5af9-8041-96753d3da62e" 00:04:31.031 ], 00:04:31.031 "product_name": "passthru", 00:04:31.031 "block_size": 512, 00:04:31.031 "num_blocks": 16384, 00:04:31.031 "uuid": "df1ae048-a354-5af9-8041-96753d3da62e", 00:04:31.031 "assigned_rate_limits": { 00:04:31.031 "rw_ios_per_sec": 0, 00:04:31.031 "rw_mbytes_per_sec": 0, 00:04:31.031 "r_mbytes_per_sec": 0, 00:04:31.031 "w_mbytes_per_sec": 0 00:04:31.031 }, 00:04:31.031 "claimed": false, 00:04:31.031 "zoned": false, 00:04:31.031 "supported_io_types": { 00:04:31.031 "read": true, 00:04:31.031 "write": true, 00:04:31.031 "unmap": true, 00:04:31.031 "flush": true, 00:04:31.031 "reset": true, 00:04:31.031 "nvme_admin": false, 00:04:31.031 "nvme_io": false, 00:04:31.031 "nvme_io_md": false, 00:04:31.031 "write_zeroes": true, 00:04:31.031 "zcopy": true, 00:04:31.031 "get_zone_info": false, 00:04:31.031 "zone_management": false, 00:04:31.031 "zone_append": false, 00:04:31.031 "compare": false, 00:04:31.031 "compare_and_write": false, 00:04:31.031 "abort": true, 00:04:31.031 "seek_hole": false, 00:04:31.031 "seek_data": false, 00:04:31.031 "copy": true, 00:04:31.031 "nvme_iov_md": false 00:04:31.031 }, 00:04:31.031 "memory_domains": [ 00:04:31.031 { 00:04:31.031 "dma_device_id": "system", 00:04:31.031 "dma_device_type": 1 00:04:31.031 }, 00:04:31.031 { 00:04:31.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.031 "dma_device_type": 2 00:04:31.031 } 00:04:31.031 ], 00:04:31.031 "driver_specific": { 00:04:31.031 "passthru": { 00:04:31.031 "name": "Passthru0", 00:04:31.031 "base_bdev_name": "Malloc0" 00:04:31.031 } 00:04:31.031 } 00:04:31.031 } 00:04:31.031 ]' 00:04:31.291 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:31.291 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.291 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.291 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.291 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.291 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.291 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:31.291 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.291 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.291 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.291 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:04:31.291 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.291 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.291 10:37:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.291 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:31.291 10:37:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:31.291 10:37:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.291 00:04:31.291 real 0m0.291s 00:04:31.291 user 0m0.147s 00:04:31.291 sys 0m0.051s 00:04:31.291 10:37:10 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.291 10:37:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.291 ************************************ 00:04:31.291 END TEST rpc_integrity 00:04:31.291 ************************************ 00:04:31.291 10:37:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:31.291 10:37:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.291 10:37:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.291 10:37:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.291 ************************************ 00:04:31.291 START TEST rpc_plugins 00:04:31.291 ************************************ 00:04:31.291 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:31.291 10:37:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:31.291 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.291 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.291 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.291 10:37:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:31.291 10:37:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:31.291 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.291 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.291 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.291 10:37:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:31.291 { 00:04:31.291 "name": "Malloc1", 00:04:31.291 "aliases": [ 00:04:31.291 "311557ad-970d-41bf-ab23-cfbf6700e5c4" 00:04:31.291 ], 00:04:31.291 "product_name": "Malloc disk", 00:04:31.291 "block_size": 4096, 00:04:31.291 "num_blocks": 256, 00:04:31.291 "uuid": "311557ad-970d-41bf-ab23-cfbf6700e5c4", 00:04:31.291 "assigned_rate_limits": { 00:04:31.291 "rw_ios_per_sec": 0, 00:04:31.291 "rw_mbytes_per_sec": 0, 00:04:31.291 "r_mbytes_per_sec": 0, 00:04:31.291 "w_mbytes_per_sec": 0 00:04:31.291 }, 00:04:31.291 "claimed": false, 00:04:31.291 "zoned": false, 00:04:31.291 "supported_io_types": { 00:04:31.291 "read": true, 00:04:31.291 "write": true, 00:04:31.291 "unmap": true, 00:04:31.291 "flush": true, 00:04:31.291 "reset": true, 00:04:31.291 "nvme_admin": false, 00:04:31.291 "nvme_io": false, 00:04:31.291 "nvme_io_md": false, 00:04:31.291 "write_zeroes": true, 00:04:31.291 "zcopy": true, 00:04:31.291 "get_zone_info": false, 00:04:31.291 "zone_management": false, 00:04:31.291 "zone_append": false, 00:04:31.291 "compare": false, 00:04:31.291 "compare_and_write": false, 00:04:31.291 "abort": true, 00:04:31.291 "seek_hole": false, 00:04:31.291 "seek_data": false, 00:04:31.291 "copy": true, 00:04:31.291 "nvme_iov_md": false 00:04:31.291 }, 
00:04:31.291 "memory_domains": [ 00:04:31.291 { 00:04:31.291 "dma_device_id": "system", 00:04:31.291 "dma_device_type": 1 00:04:31.291 }, 00:04:31.291 { 00:04:31.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.291 "dma_device_type": 2 00:04:31.291 } 00:04:31.291 ], 00:04:31.291 "driver_specific": {} 00:04:31.291 } 00:04:31.291 ]' 00:04:31.291 10:37:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:31.550 10:37:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:31.550 10:37:10 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:31.550 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.550 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.550 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.550 10:37:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:31.550 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.550 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.550 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.550 10:37:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:31.550 10:37:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:31.550 10:37:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:31.550 00:04:31.550 real 0m0.150s 00:04:31.550 user 0m0.093s 00:04:31.550 sys 0m0.021s 00:04:31.550 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.550 10:37:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.550 ************************************ 00:04:31.550 END TEST rpc_plugins 00:04:31.550 ************************************ 00:04:31.550 10:37:10 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:31.550 10:37:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.550 10:37:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.550 10:37:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.550 ************************************ 00:04:31.550 START TEST rpc_trace_cmd_test 00:04:31.550 ************************************ 00:04:31.550 10:37:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:31.550 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:31.550 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:31.550 10:37:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.550 10:37:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:31.550 10:37:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.550 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:31.550 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1466595", 00:04:31.550 "tpoint_group_mask": "0x8", 00:04:31.550 "iscsi_conn": { 00:04:31.550 "mask": "0x2", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "scsi": { 00:04:31.550 "mask": "0x4", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "bdev": { 00:04:31.550 "mask": "0x8", 00:04:31.550 "tpoint_mask": "0xffffffffffffffff" 00:04:31.550 }, 00:04:31.550 "nvmf_rdma": { 00:04:31.550 "mask": "0x10", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "nvmf_tcp": { 00:04:31.550 "mask": "0x20", 00:04:31.550 "tpoint_mask": "0x0" 
00:04:31.550 }, 00:04:31.550 "ftl": { 00:04:31.550 "mask": "0x40", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "blobfs": { 00:04:31.550 "mask": "0x80", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "dsa": { 00:04:31.550 "mask": "0x200", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "thread": { 00:04:31.550 "mask": "0x400", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "nvme_pcie": { 00:04:31.550 "mask": "0x800", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "iaa": { 00:04:31.550 "mask": "0x1000", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "nvme_tcp": { 00:04:31.550 "mask": "0x2000", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "bdev_nvme": { 00:04:31.550 "mask": "0x4000", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "sock": { 00:04:31.550 "mask": "0x8000", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "blob": { 00:04:31.550 "mask": "0x10000", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "bdev_raid": { 00:04:31.550 "mask": "0x20000", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 }, 00:04:31.550 "scheduler": { 00:04:31.550 "mask": "0x40000", 00:04:31.550 "tpoint_mask": "0x0" 00:04:31.550 } 00:04:31.550 }' 00:04:31.550 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:31.550 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:31.550 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:31.809 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:31.809 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:31.809 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:31.809 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:31.809 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:31.809 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:31.809 10:37:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:31.809 00:04:31.809 real 0m0.219s 00:04:31.809 user 0m0.177s 00:04:31.809 sys 0m0.036s 00:04:31.809 10:37:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.809 10:37:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:31.809 ************************************ 00:04:31.809 END TEST rpc_trace_cmd_test 00:04:31.809 ************************************ 00:04:31.809 10:37:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:31.809 10:37:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:31.809 10:37:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:31.809 10:37:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.809 10:37:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.809 10:37:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.809 ************************************ 00:04:31.809 START TEST rpc_daemon_integrity 00:04:31.809 ************************************ 00:04:31.809 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:31.809 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.809 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.809 10:37:10 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:31.809 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.809 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:31.809 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.809 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.809 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.809 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.809 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.068 { 00:04:32.068 "name": "Malloc2", 00:04:32.068 "aliases": [ 00:04:32.068 "cf921093-23a9-4025-a8bf-9c9dbd1f9954" 00:04:32.068 ], 00:04:32.068 "product_name": "Malloc disk", 00:04:32.068 "block_size": 512, 00:04:32.068 "num_blocks": 16384, 00:04:32.068 "uuid": "cf921093-23a9-4025-a8bf-9c9dbd1f9954", 00:04:32.068 "assigned_rate_limits": { 00:04:32.068 "rw_ios_per_sec": 0, 00:04:32.068 "rw_mbytes_per_sec": 0, 00:04:32.068 "r_mbytes_per_sec": 0, 00:04:32.068 "w_mbytes_per_sec": 0 00:04:32.068 }, 00:04:32.068 "claimed": false, 00:04:32.068 "zoned": false, 00:04:32.068 "supported_io_types": { 00:04:32.068 "read": true, 00:04:32.068 "write": true, 00:04:32.068 "unmap": true, 00:04:32.068 "flush": true, 00:04:32.068 "reset": true, 00:04:32.068 "nvme_admin": false, 00:04:32.068 "nvme_io": false, 00:04:32.068 "nvme_io_md": false, 00:04:32.068 "write_zeroes": true, 00:04:32.068 "zcopy": true, 00:04:32.068 "get_zone_info": false, 00:04:32.068 "zone_management": false, 00:04:32.068 "zone_append": false, 00:04:32.068 "compare": false, 00:04:32.068 "compare_and_write": false, 00:04:32.068 "abort": true, 00:04:32.068 "seek_hole": false, 00:04:32.068 "seek_data": false, 00:04:32.068 "copy": true, 00:04:32.068 "nvme_iov_md": false 00:04:32.068 }, 00:04:32.068 "memory_domains": [ 00:04:32.068 { 00:04:32.068 "dma_device_id": "system", 00:04:32.068 "dma_device_type": 1 00:04:32.068 }, 00:04:32.068 { 00:04:32.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.068 "dma_device_type": 2 00:04:32.068 } 00:04:32.068 ], 00:04:32.068 "driver_specific": {} 00:04:32.068 } 00:04:32.068 ]' 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.068 [2024-11-28 10:37:10.754120] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:32.068 [2024-11-28 10:37:10.754165] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.068 [2024-11-28 10:37:10.754188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:32.068 [2024-11-28 10:37:10.754199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.068 [2024-11-28 10:37:10.756353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.068 [2024-11-28 10:37:10.756380] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.068 Passthru0 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.068 { 00:04:32.068 "name": "Malloc2", 00:04:32.068 "aliases": [ 00:04:32.068 "cf921093-23a9-4025-a8bf-9c9dbd1f9954" 00:04:32.068 ], 00:04:32.068 "product_name": "Malloc disk", 00:04:32.068 "block_size": 512, 00:04:32.068 "num_blocks": 16384, 00:04:32.068 "uuid": "cf921093-23a9-4025-a8bf-9c9dbd1f9954", 00:04:32.068 "assigned_rate_limits": { 00:04:32.068 "rw_ios_per_sec": 0, 00:04:32.068 "rw_mbytes_per_sec": 0, 00:04:32.068 "r_mbytes_per_sec": 0, 00:04:32.068 "w_mbytes_per_sec": 0 00:04:32.068 }, 00:04:32.068 "claimed": true, 00:04:32.068 "claim_type": "exclusive_write", 00:04:32.068 "zoned": false, 00:04:32.068 "supported_io_types": { 00:04:32.068 "read": true, 00:04:32.068 "write": true, 00:04:32.068 "unmap": true, 00:04:32.068 "flush": true, 00:04:32.068 "reset": true, 00:04:32.068 "nvme_admin": false, 00:04:32.068 "nvme_io": false, 00:04:32.068 "nvme_io_md": false, 00:04:32.068 "write_zeroes": true, 00:04:32.068 "zcopy": true, 00:04:32.068 "get_zone_info": false, 00:04:32.068 "zone_management": false, 00:04:32.068 "zone_append": false, 00:04:32.068 "compare": false, 00:04:32.068 "compare_and_write": false, 00:04:32.068 "abort": true, 00:04:32.068 "seek_hole": false, 00:04:32.068 "seek_data": false, 00:04:32.068 "copy": true, 00:04:32.068 "nvme_iov_md": false 00:04:32.068 }, 00:04:32.068 "memory_domains": [ 00:04:32.068 { 00:04:32.068 "dma_device_id": "system", 00:04:32.068 "dma_device_type": 1 00:04:32.068 }, 00:04:32.068 { 00:04:32.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.068 "dma_device_type": 2 00:04:32.068 } 00:04:32.068 ], 00:04:32.068 "driver_specific": {} 00:04:32.068 }, 00:04:32.068 { 00:04:32.068 "name": "Passthru0", 00:04:32.068 "aliases": [ 00:04:32.068 "74516ef8-be61-5a51-889b-dc4f1eab92e0" 00:04:32.068 ], 00:04:32.068 "product_name": "passthru", 00:04:32.068 "block_size": 512, 00:04:32.068 "num_blocks": 16384, 00:04:32.068 "uuid": "74516ef8-be61-5a51-889b-dc4f1eab92e0", 00:04:32.068 "assigned_rate_limits": { 00:04:32.068 "rw_ios_per_sec": 0, 00:04:32.068 "rw_mbytes_per_sec": 0, 00:04:32.068 "r_mbytes_per_sec": 0, 00:04:32.068 "w_mbytes_per_sec": 0 00:04:32.068 }, 00:04:32.068 "claimed": false, 00:04:32.068 "zoned": false, 00:04:32.068 "supported_io_types": { 00:04:32.068 "read": true, 00:04:32.068 "write": true, 00:04:32.068 "unmap": true, 00:04:32.068 "flush": true, 00:04:32.068 "reset": true, 00:04:32.068 "nvme_admin": 
false, 00:04:32.068 "nvme_io": false, 00:04:32.068 "nvme_io_md": false, 00:04:32.068 "write_zeroes": true, 00:04:32.068 "zcopy": true, 00:04:32.068 "get_zone_info": false, 00:04:32.068 "zone_management": false, 00:04:32.068 "zone_append": false, 00:04:32.068 "compare": false, 00:04:32.068 "compare_and_write": false, 00:04:32.068 "abort": true, 00:04:32.068 "seek_hole": false, 00:04:32.068 "seek_data": false, 00:04:32.068 "copy": true, 00:04:32.068 "nvme_iov_md": false 00:04:32.068 }, 00:04:32.068 "memory_domains": [ 00:04:32.068 { 00:04:32.068 "dma_device_id": "system", 00:04:32.068 "dma_device_type": 1 00:04:32.068 }, 00:04:32.068 { 00:04:32.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.068 "dma_device_type": 2 00:04:32.068 } 00:04:32.068 ], 00:04:32.068 "driver_specific": { 00:04:32.068 "passthru": { 00:04:32.068 "name": "Passthru0", 00:04:32.068 "base_bdev_name": "Malloc2" 00:04:32.068 } 00:04:32.068 } 00:04:32.068 } 00:04:32.068 ]' 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.068 00:04:32.068 real 0m0.285s 00:04:32.068 user 0m0.152s 00:04:32.068 sys 0m0.045s 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.068 10:37:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.068 ************************************ 00:04:32.068 END TEST rpc_daemon_integrity 00:04:32.069 ************************************ 00:04:32.327 10:37:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:32.327 10:37:10 rpc -- rpc/rpc.sh@84 -- # killprocess 1466595 00:04:32.327 10:37:10 rpc -- common/autotest_common.sh@954 -- # '[' -z 1466595 ']' 00:04:32.327 10:37:10 rpc -- common/autotest_common.sh@958 -- # kill -0 1466595 00:04:32.327 10:37:10 rpc -- common/autotest_common.sh@959 -- # uname 00:04:32.327 10:37:10 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.327 10:37:10 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1466595 00:04:32.327 10:37:11 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.327 10:37:11 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.327 10:37:11 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1466595' 00:04:32.327 killing process with pid 1466595 00:04:32.327 10:37:11 rpc -- common/autotest_common.sh@973 -- # kill 1466595 00:04:32.327 10:37:11 rpc -- common/autotest_common.sh@978 -- # wait 1466595 00:04:34.858 00:04:34.859 real 0m4.796s 00:04:34.859 user 0m5.237s 00:04:34.859 sys 0m1.020s 00:04:34.859 10:37:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.859 10:37:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.859 ************************************ 00:04:34.859 END TEST rpc 00:04:34.859 ************************************ 00:04:34.859 10:37:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:34.859 10:37:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.859 10:37:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.859 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:04:34.859 ************************************ 00:04:34.859 START TEST skip_rpc 00:04:34.859 ************************************ 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:34.859 * Looking for test storage... 00:04:34.859 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.859 10:37:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:34.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.859 --rc genhtml_branch_coverage=1 00:04:34.859 --rc genhtml_function_coverage=1 00:04:34.859 --rc genhtml_legend=1 00:04:34.859 --rc geninfo_all_blocks=1 00:04:34.859 --rc geninfo_unexecuted_blocks=1 00:04:34.859 00:04:34.859 ' 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:34.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.859 --rc genhtml_branch_coverage=1 00:04:34.859 --rc genhtml_function_coverage=1 00:04:34.859 --rc genhtml_legend=1 00:04:34.859 --rc geninfo_all_blocks=1 00:04:34.859 --rc geninfo_unexecuted_blocks=1 00:04:34.859 00:04:34.859 ' 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:34.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.859 --rc genhtml_branch_coverage=1 00:04:34.859 --rc genhtml_function_coverage=1 00:04:34.859 --rc genhtml_legend=1 00:04:34.859 --rc geninfo_all_blocks=1 00:04:34.859 --rc geninfo_unexecuted_blocks=1 00:04:34.859 00:04:34.859 ' 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:34.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.859 --rc genhtml_branch_coverage=1 00:04:34.859 --rc genhtml_function_coverage=1 00:04:34.859 --rc genhtml_legend=1 00:04:34.859 --rc geninfo_all_blocks=1 00:04:34.859 --rc geninfo_unexecuted_blocks=1 00:04:34.859 00:04:34.859 ' 00:04:34.859 10:37:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:34.859 10:37:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:34.859 10:37:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.859 10:37:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.859 ************************************ 00:04:34.859 START TEST skip_rpc 00:04:34.859 ************************************ 00:04:34.859 10:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:34.859 10:37:13 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1467588 00:04:34.859 10:37:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.859 10:37:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:34.859 10:37:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:34.859 [2024-11-28 10:37:13.637697] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:34.859 [2024-11-28 10:37:13.637779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467588 ] 00:04:35.118 [2024-11-28 10:37:13.787617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.118 [2024-11-28 10:37:13.880845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1467588 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1467588 ']' 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1467588 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1467588 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1467588' 00:04:40.388 killing process with pid 1467588 00:04:40.388 10:37:18 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1467588 00:04:40.388 10:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1467588 00:04:42.292 00:04:42.292 real 0m7.301s 00:04:42.292 user 0m6.904s 00:04:42.292 sys 0m0.440s 00:04:42.292 10:37:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.292 10:37:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.292 ************************************ 00:04:42.292 END TEST skip_rpc 00:04:42.292 ************************************ 00:04:42.292 10:37:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:42.292 10:37:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.292 10:37:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.292 10:37:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.292 ************************************ 00:04:42.292 START TEST skip_rpc_with_json 00:04:42.292 ************************************ 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1468945 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1468945 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1468945 ']' 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.292 10:37:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.292 [2024-11-28 10:37:21.027393] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
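The skip_rpc case that finished just above reduces to a single assertion: with --no-rpc-server the target must not expose the RPC socket, so any RPC call has to fail. A minimal sketch of that check, assuming the same build paths as this run (the NOT/rpc_cmd helpers from autotest_common.sh are replaced by a plain rpc.py call and an explicit exit-status test):

#!/usr/bin/env bash
# Sketch only: reproduce the skip_rpc assertion from the run above.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk

"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5    # skip_rpc.sh sleeps a fixed 5 seconds instead of waiting on the socket

if "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
    echo "FAIL: RPC answered although --no-rpc-server was given" >&2
    kill "$tgt_pid"; exit 1
fi

kill "$tgt_pid"
echo "PASS: RPC correctly unavailable with --no-rpc-server"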
00:04:42.292 [2024-11-28 10:37:21.027488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468945 ] 00:04:42.551 [2024-11-28 10:37:21.180546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.551 [2024-11-28 10:37:21.276535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.119 10:37:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.119 10:37:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:43.119 10:37:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:43.119 10:37:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.119 10:37:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.378 [2024-11-28 10:37:22.006558] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:43.378 request: 00:04:43.378 { 00:04:43.378 "trtype": "tcp", 00:04:43.378 "method": "nvmf_get_transports", 00:04:43.378 "req_id": 1 00:04:43.378 } 00:04:43.378 Got JSON-RPC error response 00:04:43.378 response: 00:04:43.378 { 00:04:43.378 "code": -19, 00:04:43.378 "message": "No such device" 00:04:43.378 } 00:04:43.378 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:43.378 10:37:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:43.378 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.378 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.378 [2024-11-28 10:37:22.018697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.378 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.378 10:37:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:43.378 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.378 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.378 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.378 10:37:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:43.378 { 00:04:43.378 "subsystems": [ 00:04:43.378 { 00:04:43.378 "subsystem": "fsdev", 00:04:43.378 "config": [ 00:04:43.378 { 00:04:43.378 "method": "fsdev_set_opts", 00:04:43.378 "params": { 00:04:43.378 "fsdev_io_pool_size": 65535, 00:04:43.378 "fsdev_io_cache_size": 256 00:04:43.378 } 00:04:43.378 } 00:04:43.378 ] 00:04:43.378 }, 00:04:43.378 { 00:04:43.378 "subsystem": "keyring", 00:04:43.378 "config": [] 00:04:43.378 }, 00:04:43.378 { 00:04:43.378 "subsystem": "iobuf", 00:04:43.378 "config": [ 00:04:43.378 { 00:04:43.378 "method": "iobuf_set_options", 00:04:43.378 "params": { 00:04:43.378 "small_pool_count": 8192, 00:04:43.378 "large_pool_count": 1024, 00:04:43.379 "small_bufsize": 8192, 00:04:43.379 "large_bufsize": 135168, 00:04:43.379 "enable_numa": false 00:04:43.379 } 00:04:43.379 } 00:04:43.379 ] 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "sock", 00:04:43.379 "config": [ 00:04:43.379 { 
00:04:43.379 "method": "sock_set_default_impl", 00:04:43.379 "params": { 00:04:43.379 "impl_name": "posix" 00:04:43.379 } 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "method": "sock_impl_set_options", 00:04:43.379 "params": { 00:04:43.379 "impl_name": "ssl", 00:04:43.379 "recv_buf_size": 4096, 00:04:43.379 "send_buf_size": 4096, 00:04:43.379 "enable_recv_pipe": true, 00:04:43.379 "enable_quickack": false, 00:04:43.379 "enable_placement_id": 0, 00:04:43.379 "enable_zerocopy_send_server": true, 00:04:43.379 "enable_zerocopy_send_client": false, 00:04:43.379 "zerocopy_threshold": 0, 00:04:43.379 "tls_version": 0, 00:04:43.379 "enable_ktls": false 00:04:43.379 } 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "method": "sock_impl_set_options", 00:04:43.379 "params": { 00:04:43.379 "impl_name": "posix", 00:04:43.379 "recv_buf_size": 2097152, 00:04:43.379 "send_buf_size": 2097152, 00:04:43.379 "enable_recv_pipe": true, 00:04:43.379 "enable_quickack": false, 00:04:43.379 "enable_placement_id": 0, 00:04:43.379 "enable_zerocopy_send_server": true, 00:04:43.379 "enable_zerocopy_send_client": false, 00:04:43.379 "zerocopy_threshold": 0, 00:04:43.379 "tls_version": 0, 00:04:43.379 "enable_ktls": false 00:04:43.379 } 00:04:43.379 } 00:04:43.379 ] 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "vmd", 00:04:43.379 "config": [] 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "accel", 00:04:43.379 "config": [ 00:04:43.379 { 00:04:43.379 "method": "accel_set_options", 00:04:43.379 "params": { 00:04:43.379 "small_cache_size": 128, 00:04:43.379 "large_cache_size": 16, 00:04:43.379 "task_count": 2048, 00:04:43.379 "sequence_count": 2048, 00:04:43.379 "buf_count": 2048 00:04:43.379 } 00:04:43.379 } 00:04:43.379 ] 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "bdev", 00:04:43.379 "config": [ 00:04:43.379 { 00:04:43.379 "method": "bdev_set_options", 00:04:43.379 "params": { 00:04:43.379 "bdev_io_pool_size": 65535, 00:04:43.379 "bdev_io_cache_size": 256, 00:04:43.379 "bdev_auto_examine": true, 00:04:43.379 "iobuf_small_cache_size": 128, 00:04:43.379 "iobuf_large_cache_size": 16 00:04:43.379 } 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "method": "bdev_raid_set_options", 00:04:43.379 "params": { 00:04:43.379 "process_window_size_kb": 1024, 00:04:43.379 "process_max_bandwidth_mb_sec": 0 00:04:43.379 } 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "method": "bdev_iscsi_set_options", 00:04:43.379 "params": { 00:04:43.379 "timeout_sec": 30 00:04:43.379 } 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "method": "bdev_nvme_set_options", 00:04:43.379 "params": { 00:04:43.379 "action_on_timeout": "none", 00:04:43.379 "timeout_us": 0, 00:04:43.379 "timeout_admin_us": 0, 00:04:43.379 "keep_alive_timeout_ms": 10000, 00:04:43.379 "arbitration_burst": 0, 00:04:43.379 "low_priority_weight": 0, 00:04:43.379 "medium_priority_weight": 0, 00:04:43.379 "high_priority_weight": 0, 00:04:43.379 "nvme_adminq_poll_period_us": 10000, 00:04:43.379 "nvme_ioq_poll_period_us": 0, 00:04:43.379 "io_queue_requests": 0, 00:04:43.379 "delay_cmd_submit": true, 00:04:43.379 "transport_retry_count": 4, 00:04:43.379 "bdev_retry_count": 3, 00:04:43.379 "transport_ack_timeout": 0, 00:04:43.379 "ctrlr_loss_timeout_sec": 0, 00:04:43.379 "reconnect_delay_sec": 0, 00:04:43.379 "fast_io_fail_timeout_sec": 0, 00:04:43.379 "disable_auto_failback": false, 00:04:43.379 "generate_uuids": false, 00:04:43.379 "transport_tos": 0, 00:04:43.379 "nvme_error_stat": false, 00:04:43.379 "rdma_srq_size": 0, 00:04:43.379 "io_path_stat": false, 
00:04:43.379 "allow_accel_sequence": false, 00:04:43.379 "rdma_max_cq_size": 0, 00:04:43.379 "rdma_cm_event_timeout_ms": 0, 00:04:43.379 "dhchap_digests": [ 00:04:43.379 "sha256", 00:04:43.379 "sha384", 00:04:43.379 "sha512" 00:04:43.379 ], 00:04:43.379 "dhchap_dhgroups": [ 00:04:43.379 "null", 00:04:43.379 "ffdhe2048", 00:04:43.379 "ffdhe3072", 00:04:43.379 "ffdhe4096", 00:04:43.379 "ffdhe6144", 00:04:43.379 "ffdhe8192" 00:04:43.379 ] 00:04:43.379 } 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "method": "bdev_nvme_set_hotplug", 00:04:43.379 "params": { 00:04:43.379 "period_us": 100000, 00:04:43.379 "enable": false 00:04:43.379 } 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "method": "bdev_wait_for_examine" 00:04:43.379 } 00:04:43.379 ] 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "scsi", 00:04:43.379 "config": null 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "scheduler", 00:04:43.379 "config": [ 00:04:43.379 { 00:04:43.379 "method": "framework_set_scheduler", 00:04:43.379 "params": { 00:04:43.379 "name": "static" 00:04:43.379 } 00:04:43.379 } 00:04:43.379 ] 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "vhost_scsi", 00:04:43.379 "config": [] 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "vhost_blk", 00:04:43.379 "config": [] 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "ublk", 00:04:43.379 "config": [] 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "nbd", 00:04:43.379 "config": [] 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "nvmf", 00:04:43.379 "config": [ 00:04:43.379 { 00:04:43.379 "method": "nvmf_set_config", 00:04:43.379 "params": { 00:04:43.379 "discovery_filter": "match_any", 00:04:43.379 "admin_cmd_passthru": { 00:04:43.379 "identify_ctrlr": false 00:04:43.379 }, 00:04:43.379 "dhchap_digests": [ 00:04:43.379 "sha256", 00:04:43.379 "sha384", 00:04:43.379 "sha512" 00:04:43.379 ], 00:04:43.379 "dhchap_dhgroups": [ 00:04:43.379 "null", 00:04:43.379 "ffdhe2048", 00:04:43.379 "ffdhe3072", 00:04:43.379 "ffdhe4096", 00:04:43.379 "ffdhe6144", 00:04:43.379 "ffdhe8192" 00:04:43.379 ] 00:04:43.379 } 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "method": "nvmf_set_max_subsystems", 00:04:43.379 "params": { 00:04:43.379 "max_subsystems": 1024 00:04:43.379 } 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "method": "nvmf_set_crdt", 00:04:43.379 "params": { 00:04:43.379 "crdt1": 0, 00:04:43.379 "crdt2": 0, 00:04:43.379 "crdt3": 0 00:04:43.379 } 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "method": "nvmf_create_transport", 00:04:43.379 "params": { 00:04:43.379 "trtype": "TCP", 00:04:43.379 "max_queue_depth": 128, 00:04:43.379 "max_io_qpairs_per_ctrlr": 127, 00:04:43.379 "in_capsule_data_size": 4096, 00:04:43.379 "max_io_size": 131072, 00:04:43.379 "io_unit_size": 131072, 00:04:43.379 "max_aq_depth": 128, 00:04:43.379 "num_shared_buffers": 511, 00:04:43.379 "buf_cache_size": 4294967295, 00:04:43.379 "dif_insert_or_strip": false, 00:04:43.379 "zcopy": false, 00:04:43.379 "c2h_success": true, 00:04:43.379 "sock_priority": 0, 00:04:43.379 "abort_timeout_sec": 1, 00:04:43.379 "ack_timeout": 0, 00:04:43.379 "data_wr_pool_size": 0 00:04:43.379 } 00:04:43.379 } 00:04:43.379 ] 00:04:43.379 }, 00:04:43.379 { 00:04:43.379 "subsystem": "iscsi", 00:04:43.379 "config": [ 00:04:43.379 { 00:04:43.379 "method": "iscsi_set_options", 00:04:43.379 "params": { 00:04:43.379 "node_base": "iqn.2016-06.io.spdk", 00:04:43.379 "max_sessions": 128, 00:04:43.379 "max_connections_per_session": 2, 00:04:43.379 "max_queue_depth": 64, 00:04:43.379 
"default_time2wait": 2, 00:04:43.379 "default_time2retain": 20, 00:04:43.379 "first_burst_length": 8192, 00:04:43.379 "immediate_data": true, 00:04:43.379 "allow_duplicated_isid": false, 00:04:43.379 "error_recovery_level": 0, 00:04:43.379 "nop_timeout": 60, 00:04:43.379 "nop_in_interval": 30, 00:04:43.379 "disable_chap": false, 00:04:43.379 "require_chap": false, 00:04:43.379 "mutual_chap": false, 00:04:43.379 "chap_group": 0, 00:04:43.379 "max_large_datain_per_connection": 64, 00:04:43.379 "max_r2t_per_connection": 4, 00:04:43.379 "pdu_pool_size": 36864, 00:04:43.379 "immediate_data_pool_size": 16384, 00:04:43.379 "data_out_pool_size": 2048 00:04:43.379 } 00:04:43.379 } 00:04:43.379 ] 00:04:43.379 } 00:04:43.379 ] 00:04:43.379 } 00:04:43.379 10:37:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:43.379 10:37:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1468945 00:04:43.379 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1468945 ']' 00:04:43.379 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1468945 00:04:43.379 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:43.380 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.380 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1468945 00:04:43.638 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.638 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.638 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1468945' 00:04:43.638 killing process with pid 1468945 00:04:43.638 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1468945 00:04:43.638 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1468945 00:04:46.172 10:37:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1469509 00:04:46.172 10:37:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:46.172 10:37:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:51.440 10:37:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1469509 00:04:51.440 10:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1469509 ']' 00:04:51.440 10:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1469509 00:04:51.440 10:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:51.441 10:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.441 10:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1469509 00:04:51.441 10:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.441 10:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.441 10:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1469509' 00:04:51.441 killing process with pid 1469509 00:04:51.441 10:37:29 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1469509 00:04:51.441 10:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1469509 00:04:53.005 10:37:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:53.006 00:04:53.006 real 0m10.815s 00:04:53.006 user 0m10.281s 00:04:53.006 sys 0m1.036s 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.006 ************************************ 00:04:53.006 END TEST skip_rpc_with_json 00:04:53.006 ************************************ 00:04:53.006 10:37:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.006 10:37:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.006 10:37:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.006 10:37:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.006 ************************************ 00:04:53.006 START TEST skip_rpc_with_delay 00:04:53.006 ************************************ 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.006 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.264 [2024-11-28 10:37:31.929401] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
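The *ERROR* line above is exactly what test_skip_rpc_with_delay wants to see: it launches the target with two flags that cannot be combined and only asserts that startup fails. A minimal sketch of that invocation, reusing the binary path already shown in this trace:

  # --wait-for-rpc asks the app to pause until an RPC arrives, but
  # --no-rpc-server removes the RPC server, so initialization must abort
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo $?   # the test only cares that this is non-zero (the trace records es=1 just below)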
00:04:53.264 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:53.264 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.264 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:53.264 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.264 00:04:53.264 real 0m0.158s 00:04:53.264 user 0m0.083s 00:04:53.264 sys 0m0.074s 00:04:53.264 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.264 10:37:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.264 ************************************ 00:04:53.264 END TEST skip_rpc_with_delay 00:04:53.264 ************************************ 00:04:53.264 10:37:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.264 10:37:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.264 10:37:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.264 10:37:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.264 10:37:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.264 10:37:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.264 ************************************ 00:04:53.264 START TEST exit_on_failed_rpc_init 00:04:53.264 ************************************ 00:04:53.264 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:53.264 10:37:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1470906 00:04:53.264 10:37:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1470906 00:04:53.264 10:37:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.264 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1470906 ']' 00:04:53.264 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.264 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.264 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.264 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.264 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.523 [2024-11-28 10:37:32.180949] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
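test_exit_on_failed_rpc_init starts by bringing up one target on the default RPC socket and waiting for it to listen. A hedged sketch of that setup; the readiness probe below is one plausible way to poll the socket and is not necessarily the exact loop waitforlisten uses:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &   # first target, reactor on core 0, default /var/tmp/spdk.sock
  spdk_pid=$!
  # poll until the UNIX-domain RPC socket answers (hypothetical probe)
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
  done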
00:04:53.523 [2024-11-28 10:37:32.181031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470906 ] 00:04:53.523 [2024-11-28 10:37:32.331804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.783 [2024-11-28 10:37:32.427643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:54.351 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.610 [2024-11-28 10:37:33.243356] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:54.610 [2024-11-28 10:37:33.243448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471174 ] 00:04:54.610 [2024-11-28 10:37:33.394499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.869 [2024-11-28 10:37:33.494217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.869 [2024-11-28 10:37:33.494293] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
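The second target (-m 0x2, so its reactor would sit on core 1) fails exactly as intended: both instances default to the same /var/tmp/spdk.sock, so the second listen attempt is rejected and spdk_app_start returns non-zero. Outside of this negative test, two targets coexist by giving each its own RPC socket, as the json_config test further down does with -r /var/tmp/spdk_tgt.sock; a sketch with a made-up socket path:

  # hypothetical non-colliding second instance (not what this test runs)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock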
00:04:54.869 [2024-11-28 10:37:33.494315] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:54.869 [2024-11-28 10:37:33.494327] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:54.869 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:54.869 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.869 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:54.869 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:54.869 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:54.869 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.869 10:37:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:54.870 10:37:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1470906 00:04:54.870 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1470906 ']' 00:04:54.870 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1470906 00:04:54.870 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:54.870 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.870 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1470906 00:04:55.130 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.130 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.130 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1470906' 00:04:55.130 killing process with pid 1470906 00:04:55.130 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1470906 00:04:55.130 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1470906 00:04:57.669 00:04:57.669 real 0m3.915s 00:04:57.669 user 0m4.201s 00:04:57.669 sys 0m0.706s 00:04:57.669 10:37:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.669 10:37:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.669 ************************************ 00:04:57.669 END TEST exit_on_failed_rpc_init 00:04:57.669 ************************************ 00:04:57.669 10:37:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:57.669 00:04:57.669 real 0m22.733s 00:04:57.669 user 0m21.705s 00:04:57.669 sys 0m2.605s 00:04:57.669 10:37:36 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.669 10:37:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.669 ************************************ 00:04:57.669 END TEST skip_rpc 00:04:57.669 ************************************ 00:04:57.669 10:37:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.669 10:37:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.670 10:37:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.670 10:37:36 -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.670 ************************************ 00:04:57.670 START TEST rpc_client 00:04:57.670 ************************************ 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.670 * Looking for test storage... 00:04:57.670 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.670 10:37:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.670 --rc genhtml_branch_coverage=1 00:04:57.670 --rc genhtml_function_coverage=1 00:04:57.670 --rc genhtml_legend=1 00:04:57.670 --rc geninfo_all_blocks=1 00:04:57.670 --rc geninfo_unexecuted_blocks=1 00:04:57.670 00:04:57.670 ' 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.670 --rc genhtml_branch_coverage=1 00:04:57.670 --rc genhtml_function_coverage=1 00:04:57.670 --rc genhtml_legend=1 00:04:57.670 --rc geninfo_all_blocks=1 00:04:57.670 --rc geninfo_unexecuted_blocks=1 00:04:57.670 00:04:57.670 ' 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.670 --rc genhtml_branch_coverage=1 00:04:57.670 --rc genhtml_function_coverage=1 00:04:57.670 --rc genhtml_legend=1 00:04:57.670 --rc geninfo_all_blocks=1 00:04:57.670 --rc geninfo_unexecuted_blocks=1 00:04:57.670 00:04:57.670 ' 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.670 --rc genhtml_branch_coverage=1 00:04:57.670 --rc genhtml_function_coverage=1 00:04:57.670 --rc genhtml_legend=1 00:04:57.670 --rc geninfo_all_blocks=1 00:04:57.670 --rc geninfo_unexecuted_blocks=1 00:04:57.670 00:04:57.670 ' 00:04:57.670 10:37:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:57.670 OK 00:04:57.670 10:37:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:57.670 00:04:57.670 real 0m0.261s 00:04:57.670 user 0m0.127s 00:04:57.670 sys 0m0.147s 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.670 10:37:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:57.670 ************************************ 00:04:57.670 END TEST rpc_client 00:04:57.670 ************************************ 00:04:57.670 10:37:36 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:57.670 
10:37:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.670 10:37:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.670 10:37:36 -- common/autotest_common.sh@10 -- # set +x 00:04:57.670 ************************************ 00:04:57.670 START TEST json_config 00:04:57.670 ************************************ 00:04:57.670 10:37:36 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:57.670 10:37:36 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.670 10:37:36 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.670 10:37:36 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.930 10:37:36 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.930 10:37:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.930 10:37:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.930 10:37:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.930 10:37:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.930 10:37:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.930 10:37:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.930 10:37:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.930 10:37:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.930 10:37:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.930 10:37:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.930 10:37:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.930 10:37:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:57.930 10:37:36 json_config -- scripts/common.sh@345 -- # : 1 00:04:57.930 10:37:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.930 10:37:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.930 10:37:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:57.930 10:37:36 json_config -- scripts/common.sh@353 -- # local d=1 00:04:57.930 10:37:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.930 10:37:36 json_config -- scripts/common.sh@355 -- # echo 1 00:04:57.930 10:37:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.930 10:37:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:57.930 10:37:36 json_config -- scripts/common.sh@353 -- # local d=2 00:04:57.930 10:37:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.930 10:37:36 json_config -- scripts/common.sh@355 -- # echo 2 00:04:57.930 10:37:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.930 10:37:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.930 10:37:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.930 10:37:36 json_config -- scripts/common.sh@368 -- # return 0 00:04:57.930 10:37:36 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.930 10:37:36 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.930 --rc genhtml_branch_coverage=1 00:04:57.930 --rc genhtml_function_coverage=1 00:04:57.930 --rc genhtml_legend=1 00:04:57.930 --rc geninfo_all_blocks=1 00:04:57.930 --rc geninfo_unexecuted_blocks=1 00:04:57.930 00:04:57.930 ' 00:04:57.930 10:37:36 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.930 --rc genhtml_branch_coverage=1 00:04:57.930 --rc genhtml_function_coverage=1 00:04:57.930 --rc genhtml_legend=1 00:04:57.930 --rc geninfo_all_blocks=1 00:04:57.930 --rc geninfo_unexecuted_blocks=1 00:04:57.930 00:04:57.930 ' 00:04:57.930 10:37:36 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.930 --rc genhtml_branch_coverage=1 00:04:57.930 --rc genhtml_function_coverage=1 00:04:57.930 --rc genhtml_legend=1 00:04:57.930 --rc geninfo_all_blocks=1 00:04:57.930 --rc geninfo_unexecuted_blocks=1 00:04:57.930 00:04:57.930 ' 00:04:57.930 10:37:36 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.930 --rc genhtml_branch_coverage=1 00:04:57.930 --rc genhtml_function_coverage=1 00:04:57.930 --rc genhtml_legend=1 00:04:57.930 --rc geninfo_all_blocks=1 00:04:57.930 --rc geninfo_unexecuted_blocks=1 00:04:57.930 00:04:57.930 ' 00:04:57.930 10:37:36 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
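The scripts/common.sh trace repeated above (once per sub-test) is only the coverage helper checking whether the installed lcov, reported as 1.15, is older than 2, so it can pick the matching coverage option style. A simplified sketch of that comparison as traced, not the verbatim helper:

  IFS=.-: read -ra ver1 <<< "1.15"   # installed lcov version -> (1 15)
  IFS=.-: read -ra ver2 <<< "2"      # threshold              -> (2)
  # fields are compared left to right; the first difference (1 < 2) decides,
  # so the pre-2.0 option style (--rc lcov_branch_coverage=1 ...) is exported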
00:04:57.930 10:37:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:57.930 10:37:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:57.930 10:37:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.930 10:37:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.930 10:37:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.930 10:37:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.930 10:37:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.930 10:37:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.930 10:37:36 json_config -- paths/export.sh@5 -- # export PATH 00:04:57.930 10:37:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@51 -- # : 0 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:57.930 
10:37:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:57.930 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:57.930 10:37:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:57.931 INFO: JSON configuration test init 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:57.931 10:37:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.931 10:37:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:57.931 10:37:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.931 10:37:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.931 10:37:36 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:57.931 10:37:36 json_config -- json_config/common.sh@9 -- # 
local app=target 00:04:57.931 10:37:36 json_config -- json_config/common.sh@10 -- # shift 00:04:57.931 10:37:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:57.931 10:37:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:57.931 10:37:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:57.931 10:37:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.931 10:37:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.931 10:37:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1471840 00:04:57.931 10:37:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:57.931 Waiting for target to run... 00:04:57.931 10:37:36 json_config -- json_config/common.sh@25 -- # waitforlisten 1471840 /var/tmp/spdk_tgt.sock 00:04:57.931 10:37:36 json_config -- common/autotest_common.sh@835 -- # '[' -z 1471840 ']' 00:04:57.931 10:37:36 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:57.931 10:37:36 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.931 10:37:36 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:57.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:57.931 10:37:36 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.931 10:37:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.931 10:37:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:57.931 [2024-11-28 10:37:36.772404] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
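Because the json_config target is started with --wait-for-rpc on its own socket, /var/tmp/spdk_tgt.sock, every later step of the test drives it through rpc.py against that socket, which is what the tgt_rpc wrapper in the following lines does. A minimal sketch of that interaction pattern, using only RPCs that appear in this trace and a placeholder file name standing in for the gen_nvme.sh output:

  sock=/var/tmp/spdk_tgt.sock
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc -s $sock load_config < generated_config.json   # feed the generated JSON config to the waiting target
  $rpc -s $sock notify_get_types                      # then query it, as tgt_check_notification_types does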
00:04:57.931 [2024-11-28 10:37:36.772500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471840 ] 00:04:58.499 [2024-11-28 10:37:37.140750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.499 [2024-11-28 10:37:37.233782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.758 10:37:37 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.758 10:37:37 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:58.758 10:37:37 json_config -- json_config/common.sh@26 -- # echo '' 00:04:58.758 00:04:58.758 10:37:37 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:58.758 10:37:37 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:58.758 10:37:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.758 10:37:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.758 10:37:37 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:58.758 10:37:37 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:58.758 10:37:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.758 10:37:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.758 10:37:37 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:58.758 10:37:37 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:58.758 10:37:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:02.946 10:37:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.946 10:37:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:02.946 10:37:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:02.946 10:37:41 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@54 -- # sort 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:02.947 10:37:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.947 10:37:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:02.947 10:37:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.947 10:37:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:05:02.947 10:37:41 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:05:02.947 10:37:41 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:02.947 10:37:41 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:02.947 10:37:41 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:02.947 10:37:41 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:02.947 10:37:41 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:02.947 10:37:41 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:02.947 10:37:41 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:02.947 10:37:41 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:02.947 10:37:41 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:05:02.947 10:37:41 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:02.947 10:37:41 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:05:02.947 10:37:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:11.061 
10:37:49 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@320 -- # e810=() 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@321 -- # x722=() 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@322 -- # mlx=() 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:11.061 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:11.061 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:11.061 10:37:49 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:11.061 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:11.061 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:05:11.061 10:37:49 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@62 -- # uname 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:11.320 10:37:49 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:11.320 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:11.320 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:11.320 altname enp217s0f0np0 00:05:11.320 altname ens818f0np0 00:05:11.320 inet 192.168.100.8/24 scope global mlx_0_0 00:05:11.320 valid_lft forever preferred_lft forever 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:11.320 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:11.320 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:11.320 altname enp217s0f1np1 00:05:11.320 altname ens818f1np1 
00:05:11.320 inet 192.168.100.9/24 scope global mlx_0_1 00:05:11.320 valid_lft forever preferred_lft forever 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@450 -- # return 0 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:11.320 10:37:50 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:11.321 192.168.100.9' 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:11.321 192.168.100.9' 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@485 -- # head -n 1 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:11.321 10:37:50 json_config -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:11.321 192.168.100.9' 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@486 -- # head -n 1 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:11.321 10:37:50 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:11.321 10:37:50 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:05:11.321 10:37:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:11.321 10:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:11.580 MallocForNvmf0 00:05:11.580 10:37:50 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:11.580 10:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:11.839 MallocForNvmf1 00:05:11.839 10:37:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:11.839 10:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:12.097 [2024-11-28 10:37:50.729559] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:12.097 [2024-11-28 10:37:50.764567] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029440/0x7f5bc2c31940) succeed. 00:05:12.097 [2024-11-28 10:37:50.777640] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000295c0/0x7f5bc23bd940) succeed. 
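For reference, the address harvesting traced above reduces to the short pipeline below. This is a sketch only: the mlx_0_0/mlx_0_1 interface names and the 192.168.100.8/9 addresses are taken from this particular run, and other hosts will report different values.

    # Pull the first IPv4 address off each RDMA-capable netdev, as the
    # get_ip_address helper does (awk grabs addr/prefix, cut drops the prefix).
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # Prints 192.168.100.8 and 192.168.100.9 on this machine.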
00:05:12.097 10:37:50 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:12.097 10:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:12.355 10:37:51 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:12.355 10:37:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:12.355 10:37:51 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:12.355 10:37:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:12.613 10:37:51 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:12.613 10:37:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:12.871 [2024-11-28 10:37:51.542470] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:12.871 10:37:51 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:12.871 10:37:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.871 10:37:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.871 10:37:51 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:12.871 10:37:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.871 10:37:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.871 10:37:51 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:12.871 10:37:51 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:12.871 10:37:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:13.129 MallocBdevForConfigChangeCheck 00:05:13.129 10:37:51 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:13.129 10:37:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.129 10:37:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.129 10:37:51 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:13.129 10:37:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.387 10:37:52 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:13.387 INFO: shutting down applications... 
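Condensed, the target configuration the json_config test just drove over the RPC socket is equivalent to the sketch below. The socket path, NQN, serial number and listener address are copied from the trace above; treat it as an illustration of the sequence, not the test script itself.

    RPC='/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

    # Two malloc bdevs that back the namespaces.
    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

    # RDMA transport, one subsystem, both namespaces, one RDMA listener.
    $RPC nvmf_create_transport -t rdma -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420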
00:05:13.387 10:37:52 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:13.387 10:37:52 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:13.387 10:37:52 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:13.387 10:37:52 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:15.921 Calling clear_iscsi_subsystem 00:05:15.921 Calling clear_nvmf_subsystem 00:05:15.921 Calling clear_nbd_subsystem 00:05:15.921 Calling clear_ublk_subsystem 00:05:15.921 Calling clear_vhost_blk_subsystem 00:05:15.921 Calling clear_vhost_scsi_subsystem 00:05:15.921 Calling clear_bdev_subsystem 00:05:15.921 10:37:54 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:15.921 10:37:54 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:15.921 10:37:54 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:15.921 10:37:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.921 10:37:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:15.921 10:37:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:16.488 10:37:55 json_config -- json_config/json_config.sh@352 -- # break 00:05:16.488 10:37:55 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:16.488 10:37:55 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:16.488 10:37:55 json_config -- json_config/common.sh@31 -- # local app=target 00:05:16.488 10:37:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:16.488 10:37:55 json_config -- json_config/common.sh@35 -- # [[ -n 1471840 ]] 00:05:16.488 10:37:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1471840 00:05:16.488 10:37:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:16.488 10:37:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.488 10:37:55 json_config -- json_config/common.sh@41 -- # kill -0 1471840 00:05:16.488 10:37:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.746 10:37:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.746 10:37:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.746 10:37:55 json_config -- json_config/common.sh@41 -- # kill -0 1471840 00:05:16.746 10:37:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.315 10:37:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.315 10:37:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.315 10:37:56 json_config -- json_config/common.sh@41 -- # kill -0 1471840 00:05:17.315 10:37:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:17.315 10:37:56 json_config -- json_config/common.sh@43 -- # break 00:05:17.315 10:37:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:17.315 10:37:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:17.315 SPDK target shutdown done 00:05:17.315 10:37:56 json_config -- 
json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:17.315 INFO: relaunching applications... 00:05:17.315 10:37:56 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.315 10:37:56 json_config -- json_config/common.sh@9 -- # local app=target 00:05:17.315 10:37:56 json_config -- json_config/common.sh@10 -- # shift 00:05:17.315 10:37:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.315 10:37:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.315 10:37:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.315 10:37:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.315 10:37:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.315 10:37:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1477949 00:05:17.315 10:37:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.315 Waiting for target to run... 00:05:17.315 10:37:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.315 10:37:56 json_config -- json_config/common.sh@25 -- # waitforlisten 1477949 /var/tmp/spdk_tgt.sock 00:05:17.315 10:37:56 json_config -- common/autotest_common.sh@835 -- # '[' -z 1477949 ']' 00:05:17.315 10:37:56 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.315 10:37:56 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.315 10:37:56 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.315 10:37:56 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.315 10:37:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.574 [2024-11-28 10:37:56.210587] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:17.574 [2024-11-28 10:37:56.210694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477949 ] 00:05:18.141 [2024-11-28 10:37:56.731339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.141 [2024-11-28 10:37:56.831589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.333 [2024-11-28 10:38:00.476638] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029bc0/0x7f9aa9dbd940) succeed. 00:05:22.333 [2024-11-28 10:38:00.488127] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029d40/0x7f9aa9d79940) succeed. 
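The relaunch step above is the usual save-and-replay pattern: dump the running target's configuration with save_config, then start a fresh spdk_tgt from that JSON. A minimal sketch follows, assuming the save_config output is redirected into spdk_tgt_config.json (that redirection is not visible in this trace); the binary path and flags are the ones used in this run.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # Capture the live configuration of the old target...
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > $SPDK/spdk_tgt_config.json

    # ...and boot a new target directly from it.
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json $SPDK/spdk_tgt_config.json &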
00:05:22.333 [2024-11-28 10:38:00.549827] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:22.333 10:38:00 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.333 10:38:00 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:22.333 10:38:00 json_config -- json_config/common.sh@26 -- # echo '' 00:05:22.333 00:05:22.333 10:38:00 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:22.333 10:38:00 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:22.333 INFO: Checking if target configuration is the same... 00:05:22.333 10:38:00 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.333 10:38:00 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:22.333 10:38:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.333 + '[' 2 -ne 2 ']' 00:05:22.333 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:22.333 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:22.333 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:22.333 +++ basename /dev/fd/62 00:05:22.333 ++ mktemp /tmp/62.XXX 00:05:22.333 + tmp_file_1=/tmp/62.iEA 00:05:22.333 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.333 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.333 + tmp_file_2=/tmp/spdk_tgt_config.json.qKL 00:05:22.333 + ret=0 00:05:22.333 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.333 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.333 + diff -u /tmp/62.iEA /tmp/spdk_tgt_config.json.qKL 00:05:22.333 + echo 'INFO: JSON config files are the same' 00:05:22.333 INFO: JSON config files are the same 00:05:22.333 + rm /tmp/62.iEA /tmp/spdk_tgt_config.json.qKL 00:05:22.333 + exit 0 00:05:22.333 10:38:00 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:22.333 10:38:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:22.333 INFO: changing configuration and checking if this can be detected... 
00:05:22.333 10:38:00 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.333 10:38:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.333 10:38:01 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.333 10:38:01 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:22.333 10:38:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.333 + '[' 2 -ne 2 ']' 00:05:22.333 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:22.333 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:22.333 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:22.333 +++ basename /dev/fd/62 00:05:22.333 ++ mktemp /tmp/62.XXX 00:05:22.333 + tmp_file_1=/tmp/62.B4T 00:05:22.333 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.333 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.333 + tmp_file_2=/tmp/spdk_tgt_config.json.Kl9 00:05:22.333 + ret=0 00:05:22.333 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.593 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.851 + diff -u /tmp/62.B4T /tmp/spdk_tgt_config.json.Kl9 00:05:22.851 + ret=1 00:05:22.851 + echo '=== Start of file: /tmp/62.B4T ===' 00:05:22.851 + cat /tmp/62.B4T 00:05:22.851 + echo '=== End of file: /tmp/62.B4T ===' 00:05:22.851 + echo '' 00:05:22.851 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Kl9 ===' 00:05:22.851 + cat /tmp/spdk_tgt_config.json.Kl9 00:05:22.851 + echo '=== End of file: /tmp/spdk_tgt_config.json.Kl9 ===' 00:05:22.851 + echo '' 00:05:22.851 + rm /tmp/62.B4T /tmp/spdk_tgt_config.json.Kl9 00:05:22.851 + exit 1 00:05:22.851 10:38:01 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:22.851 INFO: configuration change detected. 
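Both comparisons above follow the same recipe: canonicalize each JSON config with config_filter.py -method sort, then diff the results; an empty diff means the configurations match, a non-empty one is reported as a detected change. A rough sketch is shown below; the temporary file names are invented here and the exact redirections inside json_diff.sh are assumed.

    FILTER=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py

    # Sort both configs into a canonical form before comparing.
    "$FILTER" -method sort < config_a.json > /tmp/a.sorted
    "$FILTER" -method sort < config_b.json > /tmp/b.sorted

    if diff -u /tmp/a.sorted /tmp/b.sorted; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi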
00:05:22.851 10:38:01 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:22.851 10:38:01 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:22.851 10:38:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@324 -- # [[ -n 1477949 ]] 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.852 10:38:01 json_config -- json_config/json_config.sh@330 -- # killprocess 1477949 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@954 -- # '[' -z 1477949 ']' 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@958 -- # kill -0 1477949 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@959 -- # uname 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1477949 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1477949' 00:05:22.852 killing process with pid 1477949 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@973 -- # kill 1477949 00:05:22.852 10:38:01 json_config -- common/autotest_common.sh@978 -- # wait 1477949 00:05:26.140 10:38:04 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.140 10:38:04 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:26.140 10:38:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.140 10:38:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.140 10:38:05 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:26.140 10:38:05 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:26.140 INFO: Success 00:05:26.140 10:38:05 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:05:26.140 10:38:05 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:26.140 10:38:05 json_config -- nvmf/common.sh@121 -- # sync 00:05:26.140 10:38:05 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:05:26.140 10:38:05 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:05:26.140 10:38:05 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:05:26.140 10:38:05 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:26.140 10:38:05 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:05:26.140 00:05:26.140 real 0m28.545s 00:05:26.140 user 0m30.873s 00:05:26.140 sys 0m9.872s 00:05:26.140 10:38:05 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.140 10:38:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.140 ************************************ 00:05:26.140 END TEST json_config 00:05:26.140 ************************************ 00:05:26.399 10:38:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.399 10:38:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.399 10:38:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.399 10:38:05 -- common/autotest_common.sh@10 -- # set +x 00:05:26.399 ************************************ 00:05:26.399 START TEST json_config_extra_key 00:05:26.399 ************************************ 00:05:26.399 10:38:05 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.399 10:38:05 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.399 10:38:05 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.399 10:38:05 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.399 10:38:05 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:26.399 10:38:05 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.399 10:38:05 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.399 --rc genhtml_branch_coverage=1 00:05:26.399 --rc genhtml_function_coverage=1 00:05:26.399 --rc genhtml_legend=1 00:05:26.399 --rc geninfo_all_blocks=1 00:05:26.399 --rc geninfo_unexecuted_blocks=1 00:05:26.399 00:05:26.399 ' 00:05:26.399 10:38:05 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.399 --rc genhtml_branch_coverage=1 00:05:26.399 --rc genhtml_function_coverage=1 00:05:26.399 --rc genhtml_legend=1 00:05:26.399 --rc geninfo_all_blocks=1 00:05:26.399 --rc geninfo_unexecuted_blocks=1 00:05:26.399 00:05:26.399 ' 00:05:26.399 10:38:05 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.399 --rc genhtml_branch_coverage=1 00:05:26.399 --rc genhtml_function_coverage=1 00:05:26.399 --rc genhtml_legend=1 00:05:26.399 --rc geninfo_all_blocks=1 00:05:26.399 --rc geninfo_unexecuted_blocks=1 00:05:26.399 00:05:26.399 ' 00:05:26.399 10:38:05 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.399 --rc genhtml_branch_coverage=1 00:05:26.399 --rc genhtml_function_coverage=1 00:05:26.399 --rc genhtml_legend=1 00:05:26.399 --rc geninfo_all_blocks=1 00:05:26.399 --rc geninfo_unexecuted_blocks=1 00:05:26.399 00:05:26.399 ' 00:05:26.399 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.399 
10:38:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.399 10:38:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.399 10:38:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.399 10:38:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.399 10:38:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.399 10:38:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:26.399 10:38:05 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.399 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.399 10:38:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.400 10:38:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.400 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:26.400 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:26.400 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:26.400 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:26.400 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:26.400 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:26.400 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:26.400 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:26.400 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:26.400 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.400 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:26.400 INFO: launching applications... 
00:05:26.659 10:38:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.659 10:38:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:26.659 10:38:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:26.659 10:38:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.659 10:38:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.659 10:38:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.659 10:38:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.659 10:38:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.659 10:38:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1479675 00:05:26.659 10:38:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.659 Waiting for target to run... 00:05:26.659 10:38:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1479675 /var/tmp/spdk_tgt.sock 00:05:26.659 10:38:05 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1479675 ']' 00:05:26.659 10:38:05 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.659 10:38:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.659 10:38:05 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.659 10:38:05 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.659 10:38:05 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.659 10:38:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.659 [2024-11-28 10:38:05.375694] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:26.659 [2024-11-28 10:38:05.375785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479675 ] 00:05:26.918 [2024-11-28 10:38:05.747423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.176 [2024-11-28 10:38:05.839252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.761 10:38:06 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.761 10:38:06 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:27.761 10:38:06 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:27.761 00:05:27.761 10:38:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:27.761 INFO: shutting down applications... 
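The shutdown that follows reuses the wait loop already seen for the json_config target: send SIGINT, then poll the pid for up to 30 half-second intervals before giving up. As a standalone sketch (the pid value is the one recorded in this run):

    app_pid=1479675               # pid recorded by waitforlisten above
    kill -SIGINT "$app_pid"

    for ((i = 0; i < 30; i++)); do
        # kill -0 only probes the process; it fails once the target has exited.
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done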
00:05:27.761 10:38:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:27.761 10:38:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:27.761 10:38:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.761 10:38:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1479675 ]] 00:05:27.761 10:38:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1479675 00:05:27.761 10:38:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.761 10:38:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.761 10:38:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1479675 00:05:27.761 10:38:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.331 10:38:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.332 10:38:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.332 10:38:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1479675 00:05:28.332 10:38:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.590 10:38:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.590 10:38:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.590 10:38:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1479675 00:05:28.590 10:38:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.156 10:38:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.156 10:38:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.156 10:38:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1479675 00:05:29.156 10:38:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.722 10:38:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.722 10:38:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.722 10:38:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1479675 00:05:29.722 10:38:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.288 10:38:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.288 10:38:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.288 10:38:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1479675 00:05:30.288 10:38:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:30.288 10:38:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:30.288 10:38:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:30.288 10:38:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:30.288 SPDK target shutdown done 00:05:30.288 10:38:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:30.288 Success 00:05:30.288 00:05:30.288 real 0m3.886s 00:05:30.288 user 0m3.591s 00:05:30.288 sys 0m0.652s 00:05:30.288 10:38:08 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.288 10:38:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:30.288 ************************************ 00:05:30.288 END TEST json_config_extra_key 00:05:30.288 ************************************ 00:05:30.288 10:38:09 -- 
spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.288 10:38:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.288 10:38:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.288 10:38:09 -- common/autotest_common.sh@10 -- # set +x 00:05:30.288 ************************************ 00:05:30.288 START TEST alias_rpc 00:05:30.288 ************************************ 00:05:30.288 10:38:09 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.288 * Looking for test storage... 00:05:30.288 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:30.288 10:38:09 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.546 10:38:09 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.546 10:38:09 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.546 10:38:09 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.546 10:38:09 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:30.546 10:38:09 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.546 10:38:09 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.546 --rc genhtml_branch_coverage=1 00:05:30.546 --rc genhtml_function_coverage=1 00:05:30.546 --rc genhtml_legend=1 00:05:30.546 --rc geninfo_all_blocks=1 00:05:30.546 --rc geninfo_unexecuted_blocks=1 00:05:30.546 00:05:30.546 ' 00:05:30.546 10:38:09 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.546 --rc genhtml_branch_coverage=1 00:05:30.546 --rc genhtml_function_coverage=1 00:05:30.546 --rc genhtml_legend=1 00:05:30.546 --rc geninfo_all_blocks=1 00:05:30.546 --rc geninfo_unexecuted_blocks=1 00:05:30.546 00:05:30.546 ' 00:05:30.546 10:38:09 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.546 --rc genhtml_branch_coverage=1 00:05:30.546 --rc genhtml_function_coverage=1 00:05:30.546 --rc genhtml_legend=1 00:05:30.546 --rc geninfo_all_blocks=1 00:05:30.546 --rc geninfo_unexecuted_blocks=1 00:05:30.546 00:05:30.546 ' 00:05:30.546 10:38:09 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.546 --rc genhtml_branch_coverage=1 00:05:30.546 --rc genhtml_function_coverage=1 00:05:30.546 --rc genhtml_legend=1 00:05:30.546 --rc geninfo_all_blocks=1 00:05:30.546 --rc geninfo_unexecuted_blocks=1 00:05:30.546 00:05:30.546 ' 00:05:30.546 10:38:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:30.546 10:38:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1480288 00:05:30.546 10:38:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1480288 00:05:30.547 10:38:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.547 10:38:09 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1480288 ']' 00:05:30.547 10:38:09 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.547 10:38:09 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.547 10:38:09 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:30.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.547 10:38:09 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.547 10:38:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.547 [2024-11-28 10:38:09.351750] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:30.547 [2024-11-28 10:38:09.351846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480288 ] 00:05:30.805 [2024-11-28 10:38:09.506243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.805 [2024-11-28 10:38:09.602064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.739 10:38:10 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.739 10:38:10 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.739 10:38:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:31.739 10:38:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1480288 00:05:31.739 10:38:10 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1480288 ']' 00:05:31.739 10:38:10 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1480288 00:05:31.739 10:38:10 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:31.739 10:38:10 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.739 10:38:10 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1480288 00:05:31.997 10:38:10 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.997 10:38:10 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.997 10:38:10 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1480288' 00:05:31.997 killing process with pid 1480288 00:05:31.997 10:38:10 alias_rpc -- common/autotest_common.sh@973 -- # kill 1480288 00:05:31.997 10:38:10 alias_rpc -- common/autotest_common.sh@978 -- # wait 1480288 00:05:34.533 00:05:34.533 real 0m3.789s 00:05:34.533 user 0m3.802s 00:05:34.533 sys 0m0.624s 00:05:34.533 10:38:12 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.533 10:38:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.533 ************************************ 00:05:34.533 END TEST alias_rpc 00:05:34.533 ************************************ 00:05:34.533 10:38:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:34.533 10:38:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.533 10:38:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.533 10:38:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.533 10:38:12 -- common/autotest_common.sh@10 -- # set +x 00:05:34.533 ************************************ 00:05:34.533 START TEST spdkcli_tcp 00:05:34.533 ************************************ 00:05:34.533 10:38:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.533 * Looking for test storage... 
00:05:34.533 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.533 10:38:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.533 --rc genhtml_branch_coverage=1 00:05:34.533 --rc genhtml_function_coverage=1 00:05:34.533 --rc genhtml_legend=1 00:05:34.533 --rc geninfo_all_blocks=1 00:05:34.533 --rc geninfo_unexecuted_blocks=1 00:05:34.533 00:05:34.533 ' 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.533 --rc genhtml_branch_coverage=1 00:05:34.533 --rc genhtml_function_coverage=1 00:05:34.533 --rc genhtml_legend=1 00:05:34.533 --rc geninfo_all_blocks=1 00:05:34.533 --rc geninfo_unexecuted_blocks=1 
00:05:34.533 00:05:34.533 ' 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.533 --rc genhtml_branch_coverage=1 00:05:34.533 --rc genhtml_function_coverage=1 00:05:34.533 --rc genhtml_legend=1 00:05:34.533 --rc geninfo_all_blocks=1 00:05:34.533 --rc geninfo_unexecuted_blocks=1 00:05:34.533 00:05:34.533 ' 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.533 --rc genhtml_branch_coverage=1 00:05:34.533 --rc genhtml_function_coverage=1 00:05:34.533 --rc genhtml_legend=1 00:05:34.533 --rc geninfo_all_blocks=1 00:05:34.533 --rc geninfo_unexecuted_blocks=1 00:05:34.533 00:05:34.533 ' 00:05:34.533 10:38:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:34.533 10:38:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:34.533 10:38:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:34.533 10:38:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:34.533 10:38:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:34.533 10:38:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:34.533 10:38:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.533 10:38:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1481132 00:05:34.533 10:38:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1481132 00:05:34.533 10:38:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1481132 ']' 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.533 10:38:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.533 [2024-11-28 10:38:13.225819] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
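Note on the spdkcli_tcp run that follows: instead of talking to the target over the default UNIX-domain socket, the test bridges TCP 127.0.0.1:9998 to /var/tmp/spdk.sock with socat and drives the JSON-RPC interface through the TCP side. A minimal sketch of the same bridge run by hand, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock and that socat and scripts/rpc.py are available at the workspace paths used throughout this job (the retry/timeout flags mirror the ones in the trace below):

    # expose the UNIX-domain RPC socket on 127.0.0.1:9998 (same command as spdkcli/tcp.sh@30)
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # query the target through the TCP side: -r 100 retries, -t 2 second timeout per attempt
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill $socat_pid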
00:05:34.533 [2024-11-28 10:38:13.225929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481132 ] 00:05:34.533 [2024-11-28 10:38:13.375734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.792 [2024-11-28 10:38:13.475808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.792 [2024-11-28 10:38:13.475817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.729 10:38:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.729 10:38:14 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:35.729 10:38:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1481266 00:05:35.729 10:38:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:35.729 10:38:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:35.729 [ 00:05:35.729 "bdev_malloc_delete", 00:05:35.729 "bdev_malloc_create", 00:05:35.729 "bdev_null_resize", 00:05:35.729 "bdev_null_delete", 00:05:35.729 "bdev_null_create", 00:05:35.729 "bdev_nvme_cuse_unregister", 00:05:35.729 "bdev_nvme_cuse_register", 00:05:35.729 "bdev_opal_new_user", 00:05:35.729 "bdev_opal_set_lock_state", 00:05:35.729 "bdev_opal_delete", 00:05:35.729 "bdev_opal_get_info", 00:05:35.729 "bdev_opal_create", 00:05:35.729 "bdev_nvme_opal_revert", 00:05:35.729 "bdev_nvme_opal_init", 00:05:35.729 "bdev_nvme_send_cmd", 00:05:35.729 "bdev_nvme_set_keys", 00:05:35.729 "bdev_nvme_get_path_iostat", 00:05:35.729 "bdev_nvme_get_mdns_discovery_info", 00:05:35.729 "bdev_nvme_stop_mdns_discovery", 00:05:35.729 "bdev_nvme_start_mdns_discovery", 00:05:35.729 "bdev_nvme_set_multipath_policy", 00:05:35.729 "bdev_nvme_set_preferred_path", 00:05:35.729 "bdev_nvme_get_io_paths", 00:05:35.729 "bdev_nvme_remove_error_injection", 00:05:35.729 "bdev_nvme_add_error_injection", 00:05:35.729 "bdev_nvme_get_discovery_info", 00:05:35.729 "bdev_nvme_stop_discovery", 00:05:35.729 "bdev_nvme_start_discovery", 00:05:35.729 "bdev_nvme_get_controller_health_info", 00:05:35.729 "bdev_nvme_disable_controller", 00:05:35.729 "bdev_nvme_enable_controller", 00:05:35.729 "bdev_nvme_reset_controller", 00:05:35.729 "bdev_nvme_get_transport_statistics", 00:05:35.729 "bdev_nvme_apply_firmware", 00:05:35.729 "bdev_nvme_detach_controller", 00:05:35.729 "bdev_nvme_get_controllers", 00:05:35.729 "bdev_nvme_attach_controller", 00:05:35.729 "bdev_nvme_set_hotplug", 00:05:35.729 "bdev_nvme_set_options", 00:05:35.729 "bdev_passthru_delete", 00:05:35.729 "bdev_passthru_create", 00:05:35.729 "bdev_lvol_set_parent_bdev", 00:05:35.729 "bdev_lvol_set_parent", 00:05:35.729 "bdev_lvol_check_shallow_copy", 00:05:35.729 "bdev_lvol_start_shallow_copy", 00:05:35.729 "bdev_lvol_grow_lvstore", 00:05:35.729 "bdev_lvol_get_lvols", 00:05:35.729 "bdev_lvol_get_lvstores", 00:05:35.729 "bdev_lvol_delete", 00:05:35.729 "bdev_lvol_set_read_only", 00:05:35.729 "bdev_lvol_resize", 00:05:35.729 "bdev_lvol_decouple_parent", 00:05:35.729 "bdev_lvol_inflate", 00:05:35.729 "bdev_lvol_rename", 00:05:35.729 "bdev_lvol_clone_bdev", 00:05:35.729 "bdev_lvol_clone", 00:05:35.729 "bdev_lvol_snapshot", 00:05:35.729 "bdev_lvol_create", 00:05:35.729 "bdev_lvol_delete_lvstore", 00:05:35.729 "bdev_lvol_rename_lvstore", 
00:05:35.729 "bdev_lvol_create_lvstore", 00:05:35.729 "bdev_raid_set_options", 00:05:35.729 "bdev_raid_remove_base_bdev", 00:05:35.729 "bdev_raid_add_base_bdev", 00:05:35.729 "bdev_raid_delete", 00:05:35.729 "bdev_raid_create", 00:05:35.729 "bdev_raid_get_bdevs", 00:05:35.729 "bdev_error_inject_error", 00:05:35.729 "bdev_error_delete", 00:05:35.729 "bdev_error_create", 00:05:35.729 "bdev_split_delete", 00:05:35.729 "bdev_split_create", 00:05:35.729 "bdev_delay_delete", 00:05:35.729 "bdev_delay_create", 00:05:35.729 "bdev_delay_update_latency", 00:05:35.729 "bdev_zone_block_delete", 00:05:35.729 "bdev_zone_block_create", 00:05:35.729 "blobfs_create", 00:05:35.729 "blobfs_detect", 00:05:35.729 "blobfs_set_cache_size", 00:05:35.729 "bdev_aio_delete", 00:05:35.729 "bdev_aio_rescan", 00:05:35.729 "bdev_aio_create", 00:05:35.729 "bdev_ftl_set_property", 00:05:35.729 "bdev_ftl_get_properties", 00:05:35.729 "bdev_ftl_get_stats", 00:05:35.729 "bdev_ftl_unmap", 00:05:35.729 "bdev_ftl_unload", 00:05:35.729 "bdev_ftl_delete", 00:05:35.729 "bdev_ftl_load", 00:05:35.729 "bdev_ftl_create", 00:05:35.729 "bdev_virtio_attach_controller", 00:05:35.729 "bdev_virtio_scsi_get_devices", 00:05:35.729 "bdev_virtio_detach_controller", 00:05:35.729 "bdev_virtio_blk_set_hotplug", 00:05:35.729 "bdev_iscsi_delete", 00:05:35.729 "bdev_iscsi_create", 00:05:35.729 "bdev_iscsi_set_options", 00:05:35.729 "accel_error_inject_error", 00:05:35.729 "ioat_scan_accel_module", 00:05:35.729 "dsa_scan_accel_module", 00:05:35.729 "iaa_scan_accel_module", 00:05:35.729 "keyring_file_remove_key", 00:05:35.729 "keyring_file_add_key", 00:05:35.729 "keyring_linux_set_options", 00:05:35.729 "fsdev_aio_delete", 00:05:35.729 "fsdev_aio_create", 00:05:35.729 "iscsi_get_histogram", 00:05:35.729 "iscsi_enable_histogram", 00:05:35.729 "iscsi_set_options", 00:05:35.729 "iscsi_get_auth_groups", 00:05:35.729 "iscsi_auth_group_remove_secret", 00:05:35.729 "iscsi_auth_group_add_secret", 00:05:35.729 "iscsi_delete_auth_group", 00:05:35.729 "iscsi_create_auth_group", 00:05:35.729 "iscsi_set_discovery_auth", 00:05:35.729 "iscsi_get_options", 00:05:35.729 "iscsi_target_node_request_logout", 00:05:35.729 "iscsi_target_node_set_redirect", 00:05:35.729 "iscsi_target_node_set_auth", 00:05:35.729 "iscsi_target_node_add_lun", 00:05:35.729 "iscsi_get_stats", 00:05:35.729 "iscsi_get_connections", 00:05:35.729 "iscsi_portal_group_set_auth", 00:05:35.729 "iscsi_start_portal_group", 00:05:35.729 "iscsi_delete_portal_group", 00:05:35.729 "iscsi_create_portal_group", 00:05:35.729 "iscsi_get_portal_groups", 00:05:35.729 "iscsi_delete_target_node", 00:05:35.729 "iscsi_target_node_remove_pg_ig_maps", 00:05:35.729 "iscsi_target_node_add_pg_ig_maps", 00:05:35.729 "iscsi_create_target_node", 00:05:35.729 "iscsi_get_target_nodes", 00:05:35.729 "iscsi_delete_initiator_group", 00:05:35.729 "iscsi_initiator_group_remove_initiators", 00:05:35.729 "iscsi_initiator_group_add_initiators", 00:05:35.729 "iscsi_create_initiator_group", 00:05:35.729 "iscsi_get_initiator_groups", 00:05:35.729 "nvmf_set_crdt", 00:05:35.729 "nvmf_set_config", 00:05:35.729 "nvmf_set_max_subsystems", 00:05:35.729 "nvmf_stop_mdns_prr", 00:05:35.729 "nvmf_publish_mdns_prr", 00:05:35.729 "nvmf_subsystem_get_listeners", 00:05:35.729 "nvmf_subsystem_get_qpairs", 00:05:35.729 "nvmf_subsystem_get_controllers", 00:05:35.729 "nvmf_get_stats", 00:05:35.729 "nvmf_get_transports", 00:05:35.729 "nvmf_create_transport", 00:05:35.729 "nvmf_get_targets", 00:05:35.729 "nvmf_delete_target", 00:05:35.729 "nvmf_create_target", 
00:05:35.729 "nvmf_subsystem_allow_any_host", 00:05:35.729 "nvmf_subsystem_set_keys", 00:05:35.729 "nvmf_subsystem_remove_host", 00:05:35.729 "nvmf_subsystem_add_host", 00:05:35.729 "nvmf_ns_remove_host", 00:05:35.729 "nvmf_ns_add_host", 00:05:35.729 "nvmf_subsystem_remove_ns", 00:05:35.729 "nvmf_subsystem_set_ns_ana_group", 00:05:35.729 "nvmf_subsystem_add_ns", 00:05:35.729 "nvmf_subsystem_listener_set_ana_state", 00:05:35.729 "nvmf_discovery_get_referrals", 00:05:35.729 "nvmf_discovery_remove_referral", 00:05:35.729 "nvmf_discovery_add_referral", 00:05:35.729 "nvmf_subsystem_remove_listener", 00:05:35.729 "nvmf_subsystem_add_listener", 00:05:35.729 "nvmf_delete_subsystem", 00:05:35.729 "nvmf_create_subsystem", 00:05:35.729 "nvmf_get_subsystems", 00:05:35.729 "env_dpdk_get_mem_stats", 00:05:35.729 "nbd_get_disks", 00:05:35.729 "nbd_stop_disk", 00:05:35.729 "nbd_start_disk", 00:05:35.729 "ublk_recover_disk", 00:05:35.729 "ublk_get_disks", 00:05:35.729 "ublk_stop_disk", 00:05:35.729 "ublk_start_disk", 00:05:35.729 "ublk_destroy_target", 00:05:35.729 "ublk_create_target", 00:05:35.729 "virtio_blk_create_transport", 00:05:35.729 "virtio_blk_get_transports", 00:05:35.729 "vhost_controller_set_coalescing", 00:05:35.729 "vhost_get_controllers", 00:05:35.729 "vhost_delete_controller", 00:05:35.729 "vhost_create_blk_controller", 00:05:35.729 "vhost_scsi_controller_remove_target", 00:05:35.729 "vhost_scsi_controller_add_target", 00:05:35.729 "vhost_start_scsi_controller", 00:05:35.729 "vhost_create_scsi_controller", 00:05:35.729 "thread_set_cpumask", 00:05:35.729 "scheduler_set_options", 00:05:35.729 "framework_get_governor", 00:05:35.729 "framework_get_scheduler", 00:05:35.729 "framework_set_scheduler", 00:05:35.729 "framework_get_reactors", 00:05:35.729 "thread_get_io_channels", 00:05:35.729 "thread_get_pollers", 00:05:35.729 "thread_get_stats", 00:05:35.729 "framework_monitor_context_switch", 00:05:35.729 "spdk_kill_instance", 00:05:35.729 "log_enable_timestamps", 00:05:35.729 "log_get_flags", 00:05:35.729 "log_clear_flag", 00:05:35.729 "log_set_flag", 00:05:35.729 "log_get_level", 00:05:35.729 "log_set_level", 00:05:35.729 "log_get_print_level", 00:05:35.729 "log_set_print_level", 00:05:35.729 "framework_enable_cpumask_locks", 00:05:35.729 "framework_disable_cpumask_locks", 00:05:35.729 "framework_wait_init", 00:05:35.729 "framework_start_init", 00:05:35.729 "scsi_get_devices", 00:05:35.729 "bdev_get_histogram", 00:05:35.729 "bdev_enable_histogram", 00:05:35.729 "bdev_set_qos_limit", 00:05:35.729 "bdev_set_qd_sampling_period", 00:05:35.729 "bdev_get_bdevs", 00:05:35.729 "bdev_reset_iostat", 00:05:35.729 "bdev_get_iostat", 00:05:35.729 "bdev_examine", 00:05:35.729 "bdev_wait_for_examine", 00:05:35.729 "bdev_set_options", 00:05:35.729 "accel_get_stats", 00:05:35.729 "accel_set_options", 00:05:35.729 "accel_set_driver", 00:05:35.729 "accel_crypto_key_destroy", 00:05:35.729 "accel_crypto_keys_get", 00:05:35.729 "accel_crypto_key_create", 00:05:35.729 "accel_assign_opc", 00:05:35.729 "accel_get_module_info", 00:05:35.729 "accel_get_opc_assignments", 00:05:35.729 "vmd_rescan", 00:05:35.729 "vmd_remove_device", 00:05:35.729 "vmd_enable", 00:05:35.729 "sock_get_default_impl", 00:05:35.729 "sock_set_default_impl", 00:05:35.729 "sock_impl_set_options", 00:05:35.729 "sock_impl_get_options", 00:05:35.729 "iobuf_get_stats", 00:05:35.729 "iobuf_set_options", 00:05:35.729 "keyring_get_keys", 00:05:35.729 "framework_get_pci_devices", 00:05:35.729 "framework_get_config", 00:05:35.729 "framework_get_subsystems", 
00:05:35.729 "fsdev_set_opts", 00:05:35.730 "fsdev_get_opts", 00:05:35.730 "trace_get_info", 00:05:35.730 "trace_get_tpoint_group_mask", 00:05:35.730 "trace_disable_tpoint_group", 00:05:35.730 "trace_enable_tpoint_group", 00:05:35.730 "trace_clear_tpoint_mask", 00:05:35.730 "trace_set_tpoint_mask", 00:05:35.730 "notify_get_notifications", 00:05:35.730 "notify_get_types", 00:05:35.730 "spdk_get_version", 00:05:35.730 "rpc_get_methods" 00:05:35.730 ] 00:05:35.730 10:38:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.730 10:38:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:35.730 10:38:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1481132 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1481132 ']' 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1481132 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1481132 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1481132' 00:05:35.730 killing process with pid 1481132 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1481132 00:05:35.730 10:38:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1481132 00:05:38.262 00:05:38.262 real 0m3.868s 00:05:38.262 user 0m6.892s 00:05:38.262 sys 0m0.705s 00:05:38.262 10:38:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.262 10:38:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.262 ************************************ 00:05:38.262 END TEST spdkcli_tcp 00:05:38.262 ************************************ 00:05:38.262 10:38:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.262 10:38:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.262 10:38:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.262 10:38:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.262 ************************************ 00:05:38.262 START TEST dpdk_mem_utility 00:05:38.262 ************************************ 00:05:38.262 10:38:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.262 * Looking for test storage... 
00:05:38.262 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:38.262 10:38:16 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.262 10:38:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.262 10:38:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.262 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:38.262 10:38:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.263 10:38:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:38.263 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.263 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.263 --rc genhtml_branch_coverage=1 00:05:38.263 --rc genhtml_function_coverage=1 00:05:38.263 --rc genhtml_legend=1 00:05:38.263 --rc geninfo_all_blocks=1 00:05:38.263 --rc geninfo_unexecuted_blocks=1 00:05:38.263 00:05:38.263 ' 00:05:38.263 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.263 --rc 
genhtml_branch_coverage=1 00:05:38.263 --rc genhtml_function_coverage=1 00:05:38.263 --rc genhtml_legend=1 00:05:38.263 --rc geninfo_all_blocks=1 00:05:38.263 --rc geninfo_unexecuted_blocks=1 00:05:38.263 00:05:38.263 ' 00:05:38.263 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.263 --rc genhtml_branch_coverage=1 00:05:38.263 --rc genhtml_function_coverage=1 00:05:38.263 --rc genhtml_legend=1 00:05:38.263 --rc geninfo_all_blocks=1 00:05:38.263 --rc geninfo_unexecuted_blocks=1 00:05:38.263 00:05:38.263 ' 00:05:38.263 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.263 --rc genhtml_branch_coverage=1 00:05:38.263 --rc genhtml_function_coverage=1 00:05:38.263 --rc genhtml_legend=1 00:05:38.263 --rc geninfo_all_blocks=1 00:05:38.263 --rc geninfo_unexecuted_blocks=1 00:05:38.263 00:05:38.263 ' 00:05:38.263 10:38:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:38.263 10:38:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1481754 00:05:38.263 10:38:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1481754 00:05:38.263 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1481754 ']' 00:05:38.263 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.263 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.263 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.263 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.263 10:38:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.263 10:38:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.522 [2024-11-28 10:38:17.164039] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
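Note on the dpdk_mem_utility trace that follows: the test issues the env_dpdk_get_mem_stats RPC, which makes the target write a DPDK memory dump to /tmp/spdk_mem_dump.txt, and then summarizes that dump with scripts/dpdk_mem_info.py. A sketch of the same flow run by hand against a running spdk_tgt, assuming the workspace paths used in this job and the script's default dump location:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # ask the target to dump its DPDK heap/mempool/memzone state (reported filename: /tmp/spdk_mem_dump.txt)
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats
    # summarize the dump: overall heap, mempool and memzone totals
    $SPDK/scripts/dpdk_mem_info.py
    # per-element detail for heap id 0, as in test_dpdk_mem_info.sh@23
    $SPDK/scripts/dpdk_mem_info.py -m 0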
00:05:38.522 [2024-11-28 10:38:17.164135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481754 ] 00:05:38.522 [2024-11-28 10:38:17.320525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.781 [2024-11-28 10:38:17.419407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.348 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.349 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:39.349 10:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:39.349 10:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:39.349 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.349 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.349 { 00:05:39.349 "filename": "/tmp/spdk_mem_dump.txt" 00:05:39.349 } 00:05:39.349 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.349 10:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:39.349 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:39.349 1 heaps totaling size 824.000000 MiB 00:05:39.349 size: 824.000000 MiB heap id: 0 00:05:39.349 end heaps---------- 00:05:39.349 9 mempools totaling size 603.782043 MiB 00:05:39.349 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:39.349 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:39.349 size: 100.555481 MiB name: bdev_io_1481754 00:05:39.349 size: 50.003479 MiB name: msgpool_1481754 00:05:39.349 size: 36.509338 MiB name: fsdev_io_1481754 00:05:39.349 size: 21.763794 MiB name: PDU_Pool 00:05:39.349 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:39.349 size: 4.133484 MiB name: evtpool_1481754 00:05:39.349 size: 0.026123 MiB name: Session_Pool 00:05:39.349 end mempools------- 00:05:39.349 6 memzones totaling size 4.142822 MiB 00:05:39.349 size: 1.000366 MiB name: RG_ring_0_1481754 00:05:39.349 size: 1.000366 MiB name: RG_ring_1_1481754 00:05:39.349 size: 1.000366 MiB name: RG_ring_4_1481754 00:05:39.349 size: 1.000366 MiB name: RG_ring_5_1481754 00:05:39.349 size: 0.125366 MiB name: RG_ring_2_1481754 00:05:39.349 size: 0.015991 MiB name: RG_ring_3_1481754 00:05:39.349 end memzones------- 00:05:39.349 10:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:39.609 heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:39.609 list of free elements. 
size: 16.847595 MiB 00:05:39.609 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:39.609 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:39.609 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:39.609 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:39.609 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:39.609 element at address: 0x200019a00000 with size: 0.999329 MiB 00:05:39.609 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:39.609 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:39.609 element at address: 0x200019200000 with size: 0.959900 MiB 00:05:39.609 element at address: 0x200019d00040 with size: 0.937256 MiB 00:05:39.609 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:39.609 element at address: 0x20001b400000 with size: 0.583191 MiB 00:05:39.609 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:39.609 element at address: 0x200019600000 with size: 0.491150 MiB 00:05:39.609 element at address: 0x200019e00000 with size: 0.485657 MiB 00:05:39.609 element at address: 0x200012c00000 with size: 0.436157 MiB 00:05:39.609 element at address: 0x200028800000 with size: 0.411072 MiB 00:05:39.609 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:39.609 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:39.609 list of standard malloc elements. size: 199.221497 MiB 00:05:39.609 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:39.609 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:39.609 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:39.609 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:39.609 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:39.609 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:39.609 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:39.609 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:39.609 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:39.609 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:39.609 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:39.609 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:39.609 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:39.609 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:39.609 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:39.609 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:39.609 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:39.609 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:39.609 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:39.609 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:39.609 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:39.609 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:39.609 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:39.609 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:39.609 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:39.609 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:39.609 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:05:39.609 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:39.609 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:39.609 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:39.609 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:39.609 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:39.609 list of memzone associated elements. size: 607.930908 MiB 00:05:39.609 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:39.609 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:39.609 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:39.609 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:39.609 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:39.609 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1481754_0 00:05:39.609 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:39.609 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1481754_0 00:05:39.609 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:39.609 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1481754_0 00:05:39.609 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:39.609 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:39.609 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:39.609 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:39.609 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:39.609 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1481754_0 00:05:39.609 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:39.609 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1481754 00:05:39.609 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:39.609 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1481754 00:05:39.609 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:39.609 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:39.609 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:39.609 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:39.609 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:39.609 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:39.609 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:39.609 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:39.609 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:39.609 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1481754 00:05:39.609 element at address: 0x2000008ffb80 
with size: 1.000549 MiB 00:05:39.609 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1481754 00:05:39.609 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:39.609 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1481754 00:05:39.609 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:39.609 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1481754 00:05:39.609 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:39.609 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1481754 00:05:39.609 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:39.609 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1481754 00:05:39.609 element at address: 0x20001967dbc0 with size: 0.500549 MiB 00:05:39.609 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:39.609 element at address: 0x200012c6fa80 with size: 0.500549 MiB 00:05:39.609 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:39.609 element at address: 0x200019e7c540 with size: 0.250549 MiB 00:05:39.609 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:39.609 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:39.609 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1481754 00:05:39.609 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:39.609 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1481754 00:05:39.609 element at address: 0x2000192f5bc0 with size: 0.031799 MiB 00:05:39.609 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:39.609 element at address: 0x2000288693c0 with size: 0.023804 MiB 00:05:39.609 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:39.609 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:39.609 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1481754 00:05:39.609 element at address: 0x20002886f540 with size: 0.002502 MiB 00:05:39.609 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:39.609 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:39.609 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1481754 00:05:39.609 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:39.609 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1481754 00:05:39.609 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:39.609 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1481754 00:05:39.609 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:39.609 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:39.609 10:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:39.609 10:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1481754 00:05:39.609 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1481754 ']' 00:05:39.609 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1481754 00:05:39.609 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:39.609 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.609 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1481754 00:05:39.609 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:05:39.609 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.609 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1481754' 00:05:39.609 killing process with pid 1481754 00:05:39.610 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1481754 00:05:39.610 10:38:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1481754 00:05:42.142 00:05:42.142 real 0m3.655s 00:05:42.142 user 0m3.525s 00:05:42.142 sys 0m0.656s 00:05:42.142 10:38:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.142 10:38:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.142 ************************************ 00:05:42.142 END TEST dpdk_mem_utility 00:05:42.142 ************************************ 00:05:42.142 10:38:20 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:42.142 10:38:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.142 10:38:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.142 10:38:20 -- common/autotest_common.sh@10 -- # set +x 00:05:42.142 ************************************ 00:05:42.142 START TEST event 00:05:42.142 ************************************ 00:05:42.142 10:38:20 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:42.142 * Looking for test storage... 00:05:42.142 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:42.142 10:38:20 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.142 10:38:20 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:42.142 10:38:20 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.142 10:38:20 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.142 10:38:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.142 10:38:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.142 10:38:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.142 10:38:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.142 10:38:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.142 10:38:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.142 10:38:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.142 10:38:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.142 10:38:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.142 10:38:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.142 10:38:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.142 10:38:20 event -- scripts/common.sh@344 -- # case "$op" in 00:05:42.142 10:38:20 event -- scripts/common.sh@345 -- # : 1 00:05:42.142 10:38:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.142 10:38:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.142 10:38:20 event -- scripts/common.sh@365 -- # decimal 1 00:05:42.142 10:38:20 event -- scripts/common.sh@353 -- # local d=1 00:05:42.142 10:38:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.142 10:38:20 event -- scripts/common.sh@355 -- # echo 1 00:05:42.142 10:38:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.142 10:38:20 event -- scripts/common.sh@366 -- # decimal 2 00:05:42.142 10:38:20 event -- scripts/common.sh@353 -- # local d=2 00:05:42.142 10:38:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.142 10:38:20 event -- scripts/common.sh@355 -- # echo 2 00:05:42.142 10:38:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.142 10:38:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.142 10:38:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.142 10:38:20 event -- scripts/common.sh@368 -- # return 0 00:05:42.142 10:38:20 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.142 10:38:20 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.142 --rc genhtml_branch_coverage=1 00:05:42.142 --rc genhtml_function_coverage=1 00:05:42.142 --rc genhtml_legend=1 00:05:42.142 --rc geninfo_all_blocks=1 00:05:42.142 --rc geninfo_unexecuted_blocks=1 00:05:42.142 00:05:42.142 ' 00:05:42.142 10:38:20 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.142 --rc genhtml_branch_coverage=1 00:05:42.142 --rc genhtml_function_coverage=1 00:05:42.142 --rc genhtml_legend=1 00:05:42.142 --rc geninfo_all_blocks=1 00:05:42.142 --rc geninfo_unexecuted_blocks=1 00:05:42.142 00:05:42.142 ' 00:05:42.142 10:38:20 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:42.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.142 --rc genhtml_branch_coverage=1 00:05:42.142 --rc genhtml_function_coverage=1 00:05:42.142 --rc genhtml_legend=1 00:05:42.142 --rc geninfo_all_blocks=1 00:05:42.142 --rc geninfo_unexecuted_blocks=1 00:05:42.142 00:05:42.142 ' 00:05:42.142 10:38:20 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.142 --rc genhtml_branch_coverage=1 00:05:42.142 --rc genhtml_function_coverage=1 00:05:42.142 --rc genhtml_legend=1 00:05:42.142 --rc geninfo_all_blocks=1 00:05:42.142 --rc geninfo_unexecuted_blocks=1 00:05:42.142 00:05:42.142 ' 00:05:42.143 10:38:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:42.143 10:38:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:42.143 10:38:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.143 10:38:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:42.143 10:38:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.143 10:38:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.143 ************************************ 00:05:42.143 START TEST event_perf 00:05:42.143 ************************************ 00:05:42.143 10:38:20 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
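Note on the event_perf run below: it is started with -m 0xF (a four-core mask) and -t 1 (one second of I/O), and each reactor then reports the number of events it processed on its lcore. A sketch of invoking the same microbenchmark by hand, assuming the test binaries were built in this workspace; the single-core variant is illustrative, not taken from this log:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # -m sets the reactor core mask (0xF = cores 0-3), -t the run time in seconds
    $SPDK/test/event/event_perf/event_perf -m 0xF -t 1
    # same benchmark confined to one reactor
    $SPDK/test/event/event_perf/event_perf -m 0x1 -t 1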
00:05:42.143 Running I/O for 1 seconds...[2024-11-28 10:38:20.899046] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:42.143 [2024-11-28 10:38:20.899123] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482619 ] 00:05:42.401 [2024-11-28 10:38:21.049317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.401 [2024-11-28 10:38:21.149857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.401 [2024-11-28 10:38:21.149873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.401 [2024-11-28 10:38:21.149964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.401 [2024-11-28 10:38:21.149976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.779 Running I/O for 1 seconds... 00:05:43.779 lcore 0: 208973 00:05:43.779 lcore 1: 208970 00:05:43.779 lcore 2: 208971 00:05:43.779 lcore 3: 208973 00:05:43.779 done. 00:05:43.779 00:05:43.779 real 0m1.522s 00:05:43.779 user 0m4.335s 00:05:43.779 sys 0m0.181s 00:05:43.779 10:38:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.779 10:38:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.779 ************************************ 00:05:43.779 END TEST event_perf 00:05:43.779 ************************************ 00:05:43.779 10:38:22 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:43.779 10:38:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:43.779 10:38:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.779 10:38:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.779 ************************************ 00:05:43.779 START TEST event_reactor 00:05:43.779 ************************************ 00:05:43.779 10:38:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:43.779 [2024-11-28 10:38:22.500747] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:43.779 [2024-11-28 10:38:22.500836] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482911 ] 00:05:43.779 [2024-11-28 10:38:22.652055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.038 [2024-11-28 10:38:22.746476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.417 test_start 00:05:45.417 oneshot 00:05:45.417 tick 100 00:05:45.417 tick 100 00:05:45.417 tick 250 00:05:45.417 tick 100 00:05:45.417 tick 100 00:05:45.417 tick 250 00:05:45.417 tick 100 00:05:45.417 tick 500 00:05:45.417 tick 100 00:05:45.417 tick 100 00:05:45.417 tick 250 00:05:45.417 tick 100 00:05:45.417 tick 100 00:05:45.417 test_end 00:05:45.417 00:05:45.417 real 0m1.501s 00:05:45.417 user 0m1.335s 00:05:45.417 sys 0m0.160s 00:05:45.417 10:38:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.417 10:38:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:45.417 ************************************ 00:05:45.417 END TEST event_reactor 00:05:45.417 ************************************ 00:05:45.417 10:38:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.417 10:38:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:45.417 10:38:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.417 10:38:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.417 ************************************ 00:05:45.417 START TEST event_reactor_perf 00:05:45.417 ************************************ 00:05:45.417 10:38:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.417 [2024-11-28 10:38:24.078632] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:45.417 [2024-11-28 10:38:24.078713] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483198 ] 00:05:45.417 [2024-11-28 10:38:24.228642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.676 [2024-11-28 10:38:24.323130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.054 test_start 00:05:47.054 test_end 00:05:47.054 Performance: 402291 events per second 00:05:47.054 00:05:47.054 real 0m1.492s 00:05:47.054 user 0m1.319s 00:05:47.054 sys 0m0.167s 00:05:47.054 10:38:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.054 10:38:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.054 ************************************ 00:05:47.054 END TEST event_reactor_perf 00:05:47.054 ************************************ 00:05:47.054 10:38:25 event -- event/event.sh@49 -- # uname -s 00:05:47.054 10:38:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:47.054 10:38:25 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:47.054 10:38:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.054 10:38:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.054 10:38:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.054 ************************************ 00:05:47.054 START TEST event_scheduler 00:05:47.054 ************************************ 00:05:47.054 10:38:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:47.054 * Looking for test storage... 
00:05:47.054 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:47.054 10:38:25 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.054 10:38:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.054 10:38:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.054 10:38:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.054 10:38:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.055 10:38:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.055 10:38:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:47.055 10:38:25 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.055 10:38:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.055 --rc genhtml_branch_coverage=1 00:05:47.055 --rc genhtml_function_coverage=1 00:05:47.055 --rc genhtml_legend=1 00:05:47.055 --rc geninfo_all_blocks=1 00:05:47.055 --rc geninfo_unexecuted_blocks=1 00:05:47.055 00:05:47.055 ' 00:05:47.055 10:38:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.055 --rc genhtml_branch_coverage=1 00:05:47.055 --rc genhtml_function_coverage=1 00:05:47.055 --rc genhtml_legend=1 00:05:47.055 --rc geninfo_all_blocks=1 00:05:47.055 --rc geninfo_unexecuted_blocks=1 00:05:47.055 00:05:47.055 ' 00:05:47.055 10:38:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.055 --rc genhtml_branch_coverage=1 00:05:47.055 --rc genhtml_function_coverage=1 00:05:47.055 --rc genhtml_legend=1 00:05:47.055 --rc geninfo_all_blocks=1 00:05:47.055 --rc geninfo_unexecuted_blocks=1 00:05:47.055 00:05:47.055 ' 00:05:47.055 10:38:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.055 --rc genhtml_branch_coverage=1 00:05:47.055 --rc genhtml_function_coverage=1 00:05:47.055 --rc genhtml_legend=1 00:05:47.055 --rc geninfo_all_blocks=1 00:05:47.055 --rc geninfo_unexecuted_blocks=1 00:05:47.055 00:05:47.055 ' 00:05:47.055 10:38:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:47.055 10:38:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1483525 00:05:47.055 10:38:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:47.055 10:38:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.055 10:38:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1483525 
00:05:47.055 10:38:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1483525 ']' 00:05:47.055 10:38:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.055 10:38:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.055 10:38:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.055 10:38:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.055 10:38:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.055 [2024-11-28 10:38:25.885763] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:47.055 [2024-11-28 10:38:25.885851] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483525 ] 00:05:47.315 [2024-11-28 10:38:26.032510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.315 [2024-11-28 10:38:26.132425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.315 [2024-11-28 10:38:26.132489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.315 [2024-11-28 10:38:26.132500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.315 [2024-11-28 10:38:26.132512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.884 10:38:26 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.884 10:38:26 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:47.884 10:38:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:47.884 10:38:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.884 10:38:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.884 [2024-11-28 10:38:26.694935] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:47.884 [2024-11-28 10:38:26.694962] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:47.884 [2024-11-28 10:38:26.694980] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:47.884 [2024-11-28 10:38:26.694990] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:47.884 [2024-11-28 10:38:26.695001] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:47.884 10:38:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.884 10:38:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:47.884 10:38:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.884 10:38:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.143 [2024-11-28 10:38:26.974792] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
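What this chunk of the trace does: scheduler.sh launches the scheduler test app with --wait-for-rpc so the framework pauses before init, switches it to the dynamic scheduler over RPC (the DPDK governor refuses to initialize because the 0xF core mask covers only some SMT siblings, so the scheduler keeps its defaults: load limit 20, core limit 80, core busy 95), and then lets initialization finish with framework_start_init, at which point reactors come up on cores 0-3. A hedged sketch of issuing the same sequence by hand (the RPC method names and app flags are taken from the trace; the socket-wait loop is a crude stand-in for the waitforlisten helper):

    # Sketch: driving the scheduler test app through the RPC sequence seen above.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"                            # default socket /var/tmp/spdk.sock

    "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # crude waitforlisten

    "$RPC" framework_set_scheduler dynamic                # must happen before init completes
    "$RPC" framework_start_init                           # reactors start on every core in -m 0xF

    kill "$scheduler_pid"; wait "$scheduler_pid"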
00:05:48.143 10:38:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.143 10:38:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:48.143 10:38:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.143 10:38:26 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.143 10:38:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.143 ************************************ 00:05:48.143 START TEST scheduler_create_thread 00:05:48.143 ************************************ 00:05:48.143 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:48.143 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:48.143 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.143 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 2 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 3 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 4 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 5 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 6 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 7 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 8 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 9 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 10 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.403 10:38:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.780 10:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.780 10:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:49.780 10:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:49.781 10:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.781 10:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.158 10:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.158 00:05:51.158 real 0m2.626s 00:05:51.158 user 0m0.016s 00:05:51.158 sys 0m0.006s 00:05:51.158 10:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.159 10:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.159 ************************************ 00:05:51.159 END TEST scheduler_create_thread 00:05:51.159 ************************************ 00:05:51.159 10:38:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:51.159 10:38:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1483525 00:05:51.159 10:38:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1483525 ']' 00:05:51.159 10:38:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1483525 00:05:51.159 10:38:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:51.159 10:38:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.159 10:38:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1483525 00:05:51.159 10:38:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:51.159 10:38:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:51.159 10:38:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1483525' 00:05:51.159 killing process with pid 1483525 00:05:51.159 10:38:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1483525 00:05:51.159 10:38:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1483525 00:05:51.418 [2024-11-28 10:38:30.123003] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
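The scheduler_create_thread test that just finished exercises a test-only RPC plugin (scheduler_plugin, passed to rpc.py via --plugin): four active threads pinned to cores 0-3, four idle threads pinned to the same cores, an unpinned one_third_active thread, a half_active thread whose activity is then raised to 50 with scheduler_thread_set_active, and finally a throwaway thread removed again with scheduler_thread_delete. A condensed sketch of those calls (the method names come from the trace; this assumes the scheduler test app is running and the plugin module is importable, as scheduler.sh arranges):

    # Sketch: the per-thread RPCs from test/event/scheduler/scheduler.sh, condensed.
    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin"

    for core in 0 1 2 3; do
        mask=0x$(printf '%x' $((1 << core)))
        $RPC scheduler_thread_create -n active_pinned -m "$mask" -a 100   # busy thread on one core
        $RPC scheduler_thread_create -n idle_pinned   -m "$mask" -a 0     # idle thread on the same core
    done

    $RPC scheduler_thread_create -n one_third_active -a 30                # unpinned, 30% active
    tid=$($RPC scheduler_thread_create -n half_active -a 0)               # RPC prints the new thread id
    $RPC scheduler_thread_set_active "$tid" 50                            # raise its activity to 50

    tid=$($RPC scheduler_thread_create -n deleted -a 100)
    $RPC scheduler_thread_delete "$tid"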
00:05:52.356 00:05:52.356 real 0m5.604s 00:05:52.356 user 0m9.753s 00:05:52.356 sys 0m0.563s 00:05:52.356 10:38:31 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.356 10:38:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.356 ************************************ 00:05:52.356 END TEST event_scheduler 00:05:52.356 ************************************ 00:05:52.613 10:38:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:52.613 10:38:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:52.613 10:38:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.613 10:38:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.613 10:38:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.613 ************************************ 00:05:52.613 START TEST app_repeat 00:05:52.613 ************************************ 00:05:52.613 10:38:31 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1484547 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1484547' 00:05:52.613 Process app_repeat pid: 1484547 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:52.613 spdk_app_start Round 0 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1484547 /var/tmp/spdk-nbd.sock 00:05:52.613 10:38:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1484547 ']' 00:05:52.613 10:38:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.613 10:38:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:52.613 10:38:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.613 10:38:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.613 10:38:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.613 10:38:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.613 [2024-11-28 10:38:31.370073] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:52.613 [2024-11-28 10:38:31.370166] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484547 ] 00:05:52.871 [2024-11-28 10:38:31.526099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.871 [2024-11-28 10:38:31.624253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.871 [2024-11-28 10:38:31.624265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.438 10:38:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.438 10:38:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.438 10:38:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.698 Malloc0 00:05:53.698 10:38:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.958 Malloc1 00:05:53.958 10:38:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.958 10:38:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.218 /dev/nbd0 00:05:54.218 10:38:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.218 10:38:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.218 1+0 records in 00:05:54.218 1+0 records out 00:05:54.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255285 s, 16.0 MB/s 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.218 10:38:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.218 10:38:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.218 10:38:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.218 10:38:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.478 /dev/nbd1 00:05:54.478 10:38:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.478 10:38:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.478 1+0 records in 00:05:54.478 1+0 records out 00:05:54.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278185 s, 14.7 MB/s 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.478 10:38:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.478 10:38:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.478 10:38:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.478 10:38:33 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.478 10:38:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.478 10:38:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.737 { 00:05:54.737 "nbd_device": "/dev/nbd0", 00:05:54.737 "bdev_name": "Malloc0" 00:05:54.737 }, 00:05:54.737 { 00:05:54.737 "nbd_device": "/dev/nbd1", 00:05:54.737 "bdev_name": "Malloc1" 00:05:54.737 } 00:05:54.737 ]' 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.737 { 00:05:54.737 "nbd_device": "/dev/nbd0", 00:05:54.737 "bdev_name": "Malloc0" 00:05:54.737 }, 00:05:54.737 { 00:05:54.737 "nbd_device": "/dev/nbd1", 00:05:54.737 "bdev_name": "Malloc1" 00:05:54.737 } 00:05:54.737 ]' 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.737 /dev/nbd1' 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.737 /dev/nbd1' 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.737 256+0 records in 00:05:54.737 256+0 records out 00:05:54.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010438 s, 100 MB/s 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.737 256+0 records in 00:05:54.737 256+0 records out 00:05:54.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154682 s, 67.8 MB/s 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.737 256+0 records in 00:05:54.737 256+0 records out 00:05:54.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176898 s, 59.3 MB/s 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.737 10:38:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.996 10:38:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.996 10:38:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.996 10:38:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.996 10:38:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.996 10:38:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.996 10:38:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.996 10:38:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.996 10:38:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.996 10:38:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.996 10:38:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.255 10:38:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.255 10:38:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.255 10:38:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.255 10:38:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.255 10:38:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.255 10:38:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:05:55.255 10:38:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.256 10:38:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.256 10:38:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.256 10:38:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.256 10:38:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.256 10:38:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.516 10:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.516 10:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.516 10:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.516 10:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.516 10:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.516 10:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.516 10:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.516 10:38:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.516 10:38:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.516 10:38:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.516 10:38:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.516 10:38:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.776 10:38:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.155 [2024-11-28 10:38:35.715098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.155 [2024-11-28 10:38:35.810677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.155 [2024-11-28 10:38:35.810677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.155 [2024-11-28 10:38:35.982808] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.155 [2024-11-28 10:38:35.982862] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.151 10:38:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.151 10:38:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.151 spdk_app_start Round 1 00:05:59.151 10:38:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1484547 /var/tmp/spdk-nbd.sock 00:05:59.151 10:38:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1484547 ']' 00:05:59.151 10:38:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.151 10:38:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.151 10:38:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
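Round 0 of app_repeat above is the standard nbd_rpc_data_verify flow from nbd_common.sh: create two 64 MiB malloc bdevs with 4096-byte blocks, export each as an NBD device with nbd_start_disk, sanity-check each node with a single 4 KiB dd, write 1 MiB of random data through each /dev/nbdX and read it back with cmp, then stop the devices and confirm nbd_get_disks reports an empty list before the app is killed with spdk_kill_instance SIGTERM. A hedged, single-device sketch of the write/verify core (socket path, bdev name and sizes mirror the trace; a running SPDK app and the nbd kernel module are assumed):

    # Sketch: the dd/cmp core of nbd_rpc_data_verify, reduced to one device.
    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=$(mktemp)

    $RPC bdev_malloc_create -b Malloc0 64 4096                    # 64 MiB bdev, 4 KiB blocks
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done    # wait for the device node

    dd if=/dev/urandom of="$tmp" bs=4096 count=256                # 1 MiB of random data
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct      # push it through the nbd device
    cmp -b -n 1M "$tmp" /dev/nbd0                                 # read back and compare

    $RPC nbd_stop_disk /dev/nbd0
    rm -f "$tmp"
    $RPC nbd_get_disks                                            # prints [] once nothing is exported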
00:05:59.151 10:38:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.151 10:38:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.151 10:38:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.151 10:38:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.151 10:38:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.151 Malloc0 00:05:59.151 10:38:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.427 Malloc1 00:05:59.427 10:38:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.427 10:38:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.686 /dev/nbd0 00:05:59.686 10:38:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.686 10:38:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:59.686 1+0 records in 00:05:59.686 1+0 records out 00:05:59.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223629 s, 18.3 MB/s 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.686 10:38:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.686 10:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.686 10:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.686 10:38:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.946 /dev/nbd1 00:05:59.946 10:38:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.946 10:38:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.946 1+0 records in 00:05:59.946 1+0 records out 00:05:59.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258987 s, 15.8 MB/s 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.946 10:38:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.946 10:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.946 10:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.946 10:38:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.946 10:38:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.946 10:38:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.206 { 00:06:00.206 
"nbd_device": "/dev/nbd0", 00:06:00.206 "bdev_name": "Malloc0" 00:06:00.206 }, 00:06:00.206 { 00:06:00.206 "nbd_device": "/dev/nbd1", 00:06:00.206 "bdev_name": "Malloc1" 00:06:00.206 } 00:06:00.206 ]' 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.206 { 00:06:00.206 "nbd_device": "/dev/nbd0", 00:06:00.206 "bdev_name": "Malloc0" 00:06:00.206 }, 00:06:00.206 { 00:06:00.206 "nbd_device": "/dev/nbd1", 00:06:00.206 "bdev_name": "Malloc1" 00:06:00.206 } 00:06:00.206 ]' 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.206 /dev/nbd1' 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.206 /dev/nbd1' 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.206 256+0 records in 00:06:00.206 256+0 records out 00:06:00.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107416 s, 97.6 MB/s 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.206 10:38:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.206 256+0 records in 00:06:00.206 256+0 records out 00:06:00.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149611 s, 70.1 MB/s 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.206 256+0 records in 00:06:00.206 256+0 records out 00:06:00.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210638 s, 49.8 MB/s 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.206 10:38:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.465 10:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.465 10:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.465 10:38:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.465 10:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.465 10:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.465 10:38:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.465 10:38:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.465 10:38:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.465 10:38:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.465 10:38:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.725 10:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.725 10:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.725 10:38:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.725 10:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.725 10:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.725 10:38:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.725 10:38:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.725 10:38:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.725 10:38:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.725 10:38:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.725 10:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.984 10:38:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.984 10:38:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.243 10:38:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.623 [2024-11-28 10:38:41.260813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.623 [2024-11-28 10:38:41.356030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.623 [2024-11-28 10:38:41.356038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.884 [2024-11-28 10:38:41.521238] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.884 [2024-11-28 10:38:41.521283] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.263 10:38:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.263 10:38:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.263 spdk_app_start Round 2 00:06:04.263 10:38:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1484547 /var/tmp/spdk-nbd.sock 00:06:04.263 10:38:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1484547 ']' 00:06:04.263 10:38:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.263 10:38:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.263 10:38:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
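app_repeat_test wraps the malloc/nbd verification in three rounds: event.sh loops for i in {0..2}, announces 'spdk_app_start Round $i', runs the create/verify/teardown sequence, then issues spdk_kill_instance SIGTERM and sleeps 3 seconds; the app_repeat helper was started with -t 4, so it brings the framework back up for the next round, which is why the 'Waiting for process to start up...' banner repeats above. A compact sketch of that outer loop, with the per-round body factored into a hypothetical verify_round helper standing in for the steps shown earlier:

    # Sketch: the outer loop of app_repeat_test; verify_round is a hypothetical
    # stand-in for the bdev_malloc_create + nbd start/verify/stop steps above.
    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        until $RPC rpc_get_methods &> /dev/null; do sleep 0.2; done   # crude waitforlisten

        verify_round                       # create bdevs, start/verify/stop nbd devices

        $RPC spdk_kill_instance SIGTERM    # tell the app to shut this instance down
        sleep 3                            # let it restart before the next round
    done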
00:06:04.263 10:38:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.263 10:38:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.523 10:38:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.523 10:38:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.523 10:38:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.783 Malloc0 00:06:04.783 10:38:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.042 Malloc1 00:06:05.042 10:38:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.042 10:38:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.302 /dev/nbd0 00:06:05.302 10:38:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.302 10:38:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.302 10:38:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:05.302 10:38:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.302 10:38:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.302 10:38:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.302 10:38:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:05.302 10:38:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.302 10:38:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.302 10:38:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.302 10:38:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:05.302 1+0 records in 00:06:05.302 1+0 records out 00:06:05.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025533 s, 16.0 MB/s 00:06:05.302 10:38:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.302 10:38:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.302 10:38:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.302 10:38:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.302 10:38:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.302 10:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.302 10:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.302 10:38:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.562 /dev/nbd1 00:06:05.562 10:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.562 10:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.562 1+0 records in 00:06:05.562 1+0 records out 00:06:05.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263654 s, 15.5 MB/s 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.562 10:38:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.562 10:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.562 10:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.562 10:38:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.562 10:38:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.562 10:38:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.821 { 00:06:05.821 
"nbd_device": "/dev/nbd0", 00:06:05.821 "bdev_name": "Malloc0" 00:06:05.821 }, 00:06:05.821 { 00:06:05.821 "nbd_device": "/dev/nbd1", 00:06:05.821 "bdev_name": "Malloc1" 00:06:05.821 } 00:06:05.821 ]' 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.821 { 00:06:05.821 "nbd_device": "/dev/nbd0", 00:06:05.821 "bdev_name": "Malloc0" 00:06:05.821 }, 00:06:05.821 { 00:06:05.821 "nbd_device": "/dev/nbd1", 00:06:05.821 "bdev_name": "Malloc1" 00:06:05.821 } 00:06:05.821 ]' 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.821 /dev/nbd1' 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.821 /dev/nbd1' 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.821 256+0 records in 00:06:05.821 256+0 records out 00:06:05.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113628 s, 92.3 MB/s 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.821 256+0 records in 00:06:05.821 256+0 records out 00:06:05.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157841 s, 66.4 MB/s 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.821 256+0 records in 00:06:05.821 256+0 records out 00:06:05.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178122 s, 58.9 MB/s 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.821 10:38:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.081 10:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.081 10:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.081 10:38:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.081 10:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.081 10:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.081 10:38:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.081 10:38:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.081 10:38:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.081 10:38:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.081 10:38:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.340 10:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.340 10:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.340 10:38:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.340 10:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.340 10:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.340 10:38:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.340 10:38:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.340 10:38:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.340 10:38:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.340 10:38:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.340 10:38:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.340 10:38:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.340 10:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.340 10:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.599 10:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.599 10:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.599 10:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.599 10:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.599 10:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.599 10:38:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.599 10:38:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.599 10:38:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.599 10:38:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.599 10:38:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.857 10:38:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.238 [2024-11-28 10:38:46.778073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.238 [2024-11-28 10:38:46.873335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.238 [2024-11-28 10:38:46.873335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.238 [2024-11-28 10:38:47.045171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.238 [2024-11-28 10:38:47.045225] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.142 10:38:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1484547 /var/tmp/spdk-nbd.sock 00:06:10.142 10:38:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1484547 ']' 00:06:10.142 10:38:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.142 10:38:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.142 10:38:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
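The nbd_dd_data_verify pass traced above writes 1 MiB of random data through each attached NBD device with O_DIRECT and then byte-compares the device contents against the source file before tearing the disks down. A minimal standalone sketch of that write-then-verify pattern, assuming the two /dev/nbd devices are already exported by the spdk-nbd app and using an illustrative scratch path:

    #!/usr/bin/env bash
    set -euo pipefail

    nbd_list=(/dev/nbd0 /dev/nbd1)     # already attached via nbd_start_disk
    tmp_file=/tmp/nbdrandtest          # illustrative scratch file

    # write phase: 256 x 4 KiB blocks of random data pushed to every device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: the first 1 MiB of each device must match the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"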
00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.143 10:38:48 event.app_repeat -- event/event.sh@39 -- # killprocess 1484547 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1484547 ']' 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1484547 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1484547 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1484547' 00:06:10.143 killing process with pid 1484547 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1484547 00:06:10.143 10:38:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1484547 00:06:11.080 spdk_app_start is called in Round 0. 00:06:11.080 Shutdown signal received, stop current app iteration 00:06:11.080 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:06:11.081 spdk_app_start is called in Round 1. 00:06:11.081 Shutdown signal received, stop current app iteration 00:06:11.081 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:06:11.081 spdk_app_start is called in Round 2. 00:06:11.081 Shutdown signal received, stop current app iteration 00:06:11.081 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:06:11.081 spdk_app_start is called in Round 3. 
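Each killprocess call in the trace follows the same shape: make sure the pid is non-empty and still alive, look up what is actually running under it (reactor_0 for an SPDK target), log the kill, send SIGTERM and wait for the process to exit. A simplified sketch of that helper; the sudo special-casing visible in the trace is omitted here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true           # wait only succeeds for our own children
    }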
00:06:11.081 Shutdown signal received, stop current app iteration 00:06:11.081 10:38:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:11.081 10:38:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:11.081 00:06:11.081 real 0m18.548s 00:06:11.081 user 0m38.654s 00:06:11.081 sys 0m3.174s 00:06:11.081 10:38:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.081 10:38:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.081 ************************************ 00:06:11.081 END TEST app_repeat 00:06:11.081 ************************************ 00:06:11.081 10:38:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:11.081 10:38:49 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:11.081 10:38:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.081 10:38:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.081 10:38:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.081 ************************************ 00:06:11.081 START TEST cpu_locks 00:06:11.081 ************************************ 00:06:11.081 10:38:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:11.340 * Looking for test storage... 00:06:11.340 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.340 10:38:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.340 --rc genhtml_branch_coverage=1 00:06:11.340 --rc genhtml_function_coverage=1 00:06:11.340 --rc genhtml_legend=1 00:06:11.340 --rc geninfo_all_blocks=1 00:06:11.340 --rc geninfo_unexecuted_blocks=1 00:06:11.340 00:06:11.340 ' 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.340 --rc genhtml_branch_coverage=1 00:06:11.340 --rc genhtml_function_coverage=1 00:06:11.340 --rc genhtml_legend=1 00:06:11.340 --rc geninfo_all_blocks=1 00:06:11.340 --rc geninfo_unexecuted_blocks=1 00:06:11.340 00:06:11.340 ' 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.340 --rc genhtml_branch_coverage=1 00:06:11.340 --rc genhtml_function_coverage=1 00:06:11.340 --rc genhtml_legend=1 00:06:11.340 --rc geninfo_all_blocks=1 00:06:11.340 --rc geninfo_unexecuted_blocks=1 00:06:11.340 00:06:11.340 ' 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.340 --rc genhtml_branch_coverage=1 00:06:11.340 --rc genhtml_function_coverage=1 00:06:11.340 --rc genhtml_legend=1 00:06:11.340 --rc geninfo_all_blocks=1 00:06:11.340 --rc geninfo_unexecuted_blocks=1 00:06:11.340 00:06:11.340 ' 00:06:11.340 10:38:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:11.340 10:38:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:11.340 10:38:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:11.340 10:38:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.340 10:38:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.340 ************************************ 
00:06:11.340 START TEST default_locks 00:06:11.340 ************************************ 00:06:11.340 10:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:11.340 10:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1488076 00:06:11.340 10:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.340 10:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1488076 00:06:11.341 10:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1488076 ']' 00:06:11.341 10:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.341 10:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.341 10:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.341 10:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.341 10:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.600 [2024-11-28 10:38:50.275035] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:11.600 [2024-11-28 10:38:50.275134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488076 ] 00:06:11.600 [2024-11-28 10:38:50.433982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.858 [2024-11-28 10:38:50.534630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.427 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.427 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:12.427 10:38:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1488076 00:06:12.427 10:38:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1488076 00:06:12.427 10:38:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.996 lslocks: write error 00:06:12.996 10:38:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1488076 00:06:12.996 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1488076 ']' 00:06:12.996 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1488076 00:06:12.996 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:12.996 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.996 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1488076 00:06:12.996 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.996 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.996 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1488076' 00:06:12.997 killing process with pid 1488076 00:06:12.997 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1488076 00:06:12.997 10:38:51 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1488076 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1488076 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1488076 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1488076 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1488076 ']' 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
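Earlier in this default_locks run, locks_exist confirmed that the running target actually held its CPU-core lock by listing the locks owned by that pid and grepping for the spdk_cpu_lock entry; the no_locks check that follows below asserts the lock is released once the target is gone. The positive check reduces to a one-liner:

    locks_exist() {
        local pid=$1
        # the target keeps an flock on a spdk_cpu_lock file for each core it claimed
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist "$spdk_tgt_pid" && echo "core locks are held"   # spdk_tgt_pid as set by the test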
00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.534 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1488076) - No such process 00:06:15.534 ERROR: process (pid: 1488076) is no longer running 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.534 00:06:15.534 real 0m3.697s 00:06:15.534 user 0m3.617s 00:06:15.534 sys 0m0.742s 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.534 10:38:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.534 ************************************ 00:06:15.534 END TEST default_locks 00:06:15.534 ************************************ 00:06:15.534 10:38:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:15.534 10:38:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.534 10:38:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.534 10:38:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.534 ************************************ 00:06:15.534 START TEST default_locks_via_rpc 00:06:15.534 ************************************ 00:06:15.534 10:38:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:15.534 10:38:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1488659 00:06:15.534 10:38:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1488659 00:06:15.534 10:38:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.534 10:38:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1488659 ']' 00:06:15.534 10:38:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.534 10:38:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.534 10:38:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:15.534 10:38:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.534 10:38:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.534 [2024-11-28 10:38:54.047743] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:15.534 [2024-11-28 10:38:54.047836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488659 ] 00:06:15.534 [2024-11-28 10:38:54.202073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.534 [2024-11-28 10:38:54.296968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1488659 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1488659 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1488659 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1488659 ']' 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1488659 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.470 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1488659 00:06:16.729 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.729 
10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.729 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1488659' 00:06:16.729 killing process with pid 1488659 00:06:16.729 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1488659 00:06:16.729 10:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1488659 00:06:19.267 00:06:19.267 real 0m3.638s 00:06:19.267 user 0m3.565s 00:06:19.267 sys 0m0.708s 00:06:19.267 10:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.267 10:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.267 ************************************ 00:06:19.267 END TEST default_locks_via_rpc 00:06:19.267 ************************************ 00:06:19.267 10:38:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:19.267 10:38:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.267 10:38:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.267 10:38:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.267 ************************************ 00:06:19.267 START TEST non_locking_app_on_locked_coremask 00:06:19.267 ************************************ 00:06:19.267 10:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:19.267 10:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.267 10:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1489472 00:06:19.267 10:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1489472 /var/tmp/spdk.sock 00:06:19.267 10:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1489472 ']' 00:06:19.267 10:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.268 10:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.268 10:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.268 10:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.268 10:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.268 [2024-11-28 10:38:57.760837] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
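The default_locks_via_rpc run that just ended exercises the same lock from the RPC side: framework_disable_cpumask_locks makes a live target drop its per-core lock files and framework_enable_cpumask_locks re-takes them. A sketch of driving that toggle by hand against the default socket; spdk_tgt_pid here stands in for the pid of the running target:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock

    # release the per-core locks while the target keeps running
    "$RPC" -s "$SOCK" framework_disable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock || echo "locks released"

    # take them again
    "$RPC" -s "$SOCK" framework_enable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "locks re-acquired"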
00:06:19.268 [2024-11-28 10:38:57.760929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489472 ] 00:06:19.268 [2024-11-28 10:38:57.916898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.268 [2024-11-28 10:38:58.017028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.214 10:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.214 10:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.214 10:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1489508 00:06:20.214 10:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1489508 /var/tmp/spdk2.sock 00:06:20.215 10:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:20.215 10:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1489508 ']' 00:06:20.215 10:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.215 10:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.215 10:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.215 10:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.215 10:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.215 [2024-11-28 10:38:58.831100] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:20.215 [2024-11-28 10:38:58.831204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489508 ] 00:06:20.215 [2024-11-28 10:38:59.047883] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
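non_locking_app_on_locked_coremask, now underway, runs two targets on the same core mask: the first claims the core-0 lock as usual, while the second opts out with --disable-cpumask-locks and listens on its own RPC socket so the two instances do not collide. The launch sequence boils down to the following sketch (binary path shortened, waitforlisten polling elided):

    SPDK_TGT=./build/bin/spdk_tgt      # shortened path, stands in for the full workspace path

    # first instance claims core 0 and the default /var/tmp/spdk.sock
    "$SPDK_TGT" -m 0x1 &
    pid1=$!

    # second instance shares core 0 but skips the lock and uses a second socket
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!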
00:06:20.215 [2024-11-28 10:38:59.047932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.474 [2024-11-28 10:38:59.245620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.009 10:39:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.009 10:39:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:23.009 10:39:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1489472 00:06:23.009 10:39:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.009 10:39:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1489472 00:06:23.269 lslocks: write error 00:06:23.269 10:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1489472 00:06:23.269 10:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1489472 ']' 00:06:23.269 10:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1489472 00:06:23.269 10:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.269 10:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.269 10:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1489472 00:06:23.269 10:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.269 10:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.269 10:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1489472' 00:06:23.269 killing process with pid 1489472 00:06:23.269 10:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1489472 00:06:23.269 10:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1489472 00:06:28.546 10:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1489508 00:06:28.546 10:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1489508 ']' 00:06:28.546 10:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1489508 00:06:28.546 10:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.546 10:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.546 10:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1489508 00:06:28.546 10:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.546 10:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.546 10:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1489508' 00:06:28.546 
killing process with pid 1489508 00:06:28.546 10:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1489508 00:06:28.546 10:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1489508 00:06:29.958 00:06:29.958 real 0m11.071s 00:06:29.958 user 0m11.286s 00:06:29.958 sys 0m1.515s 00:06:29.958 10:39:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.958 10:39:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.958 ************************************ 00:06:29.958 END TEST non_locking_app_on_locked_coremask 00:06:29.958 ************************************ 00:06:29.959 10:39:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:29.959 10:39:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.959 10:39:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.959 10:39:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.959 ************************************ 00:06:29.959 START TEST locking_app_on_unlocked_coremask 00:06:29.959 ************************************ 00:06:29.959 10:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:29.959 10:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1491932 00:06:29.959 10:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1491932 /var/tmp/spdk.sock 00:06:29.959 10:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:29.959 10:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1491932 ']' 00:06:29.959 10:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.959 10:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.959 10:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.959 10:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.959 10:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.217 [2024-11-28 10:39:08.918468] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:30.218 [2024-11-28 10:39:08.918560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491932 ] 00:06:30.218 [2024-11-28 10:39:09.069507] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.218 [2024-11-28 10:39:09.069549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.477 [2024-11-28 10:39:09.164784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.097 10:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.098 10:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.098 10:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:31.098 10:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1492088 00:06:31.098 10:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1492088 /var/tmp/spdk2.sock 00:06:31.098 10:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1492088 ']' 00:06:31.098 10:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.098 10:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.098 10:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.098 10:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.098 10:39:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.098 [2024-11-28 10:39:09.956622] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
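Every waitforlisten in this trace blocks until the freshly started target is actually serving RPCs on its UNIX socket. The real helper in autotest_common.sh does more bookkeeping, but the core idea can be sketched as polling a cheap RPC until it answers:

    wait_for_rpc() {                    # simplified stand-in for waitforlisten
        local sock=$1 i
        for ((i = 0; i < 100; i++)); do
            if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0                # target is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc /var/tmp/spdk2.sock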
00:06:31.098 [2024-11-28 10:39:09.956719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492088 ] 00:06:31.357 [2024-11-28 10:39:10.177716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.616 [2024-11-28 10:39:10.378886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.152 10:39:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.152 10:39:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.152 10:39:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1492088 00:06:34.152 10:39:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.152 10:39:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1492088 00:06:34.412 lslocks: write error 00:06:34.412 10:39:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1491932 00:06:34.412 10:39:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1491932 ']' 00:06:34.412 10:39:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1491932 00:06:34.412 10:39:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.412 10:39:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.412 10:39:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1491932 00:06:34.412 10:39:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.412 10:39:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.412 10:39:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1491932' 00:06:34.412 killing process with pid 1491932 00:06:34.412 10:39:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1491932 00:06:34.412 10:39:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1491932 00:06:38.608 10:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1492088 00:06:38.608 10:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1492088 ']' 00:06:38.608 10:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1492088 00:06:38.608 10:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.608 10:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.608 10:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1492088 00:06:38.867 10:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.867 10:39:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.867 10:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1492088' 00:06:38.867 killing process with pid 1492088 00:06:38.867 10:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1492088 00:06:38.867 10:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1492088 00:06:41.405 00:06:41.405 real 0m10.848s 00:06:41.405 user 0m11.049s 00:06:41.405 sys 0m1.416s 00:06:41.405 10:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.405 10:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.405 ************************************ 00:06:41.405 END TEST locking_app_on_unlocked_coremask 00:06:41.405 ************************************ 00:06:41.405 10:39:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:41.405 10:39:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.405 10:39:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.405 10:39:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.405 ************************************ 00:06:41.405 START TEST locking_app_on_locked_coremask 00:06:41.406 ************************************ 00:06:41.406 10:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:41.406 10:39:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1493837 00:06:41.406 10:39:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1493837 /var/tmp/spdk.sock 00:06:41.406 10:39:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.406 10:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1493837 ']' 00:06:41.406 10:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.406 10:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.406 10:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.406 10:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.406 10:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.406 [2024-11-28 10:39:19.846741] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:41.406 [2024-11-28 10:39:19.846832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493837 ] 00:06:41.406 [2024-11-28 10:39:19.997133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.406 [2024-11-28 10:39:20.103149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1494090 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1494090 /var/tmp/spdk2.sock 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1494090 /var/tmp/spdk2.sock 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1494090 /var/tmp/spdk2.sock 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1494090 ']' 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.975 10:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.235 [2024-11-28 10:39:20.940040] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:42.235 [2024-11-28 10:39:20.940138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494090 ] 00:06:42.494 [2024-11-28 10:39:21.153817] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1493837 has claimed it. 00:06:42.494 [2024-11-28 10:39:21.153870] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.752 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1494090) - No such process 00:06:42.752 ERROR: process (pid: 1494090) is no longer running 00:06:42.752 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.752 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:42.752 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:42.752 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.752 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.752 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.752 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1493837 00:06:42.752 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1493837 00:06:42.752 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.321 lslocks: write error 00:06:43.321 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1493837 00:06:43.321 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1493837 ']' 00:06:43.321 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1493837 00:06:43.321 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:43.321 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.321 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1493837 00:06:43.321 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.321 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.321 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1493837' 00:06:43.321 killing process with pid 1493837 00:06:43.321 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1493837 00:06:43.321 10:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1493837 00:06:45.860 00:06:45.860 real 0m4.428s 00:06:45.860 user 0m4.517s 00:06:45.860 sys 0m0.949s 00:06:45.860 10:39:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
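locking_app_on_locked_coremask is the negative case: with process 1493837 holding the core-0 lock, a second target started on -m 0x1 without --disable-cpumask-locks must fail with the claim_cpu_cores error shown above, and the NOT wrapper turns that failure into a test pass. The helper name NOT comes from the trace; its body here is reduced to the essential inversion:

    SPDK_TGT=./build/bin/spdk_tgt       # shortened path, as in the earlier sketch

    NOT() {                             # succeed only if the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    # a second target on an already-locked core is expected to exit with an error
    NOT "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock && echo "core lock enforced"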
00:06:45.860 10:39:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.860 ************************************ 00:06:45.860 END TEST locking_app_on_locked_coremask 00:06:45.860 ************************************ 00:06:45.860 10:39:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:45.860 10:39:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.860 10:39:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.860 10:39:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.860 ************************************ 00:06:45.860 START TEST locking_overlapped_coremask 00:06:45.860 ************************************ 00:06:45.860 10:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:45.860 10:39:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:45.860 10:39:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1494672 00:06:45.860 10:39:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1494672 /var/tmp/spdk.sock 00:06:45.860 10:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1494672 ']' 00:06:45.860 10:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.860 10:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.860 10:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.860 10:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.860 10:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.860 [2024-11-28 10:39:24.339902] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:45.860 [2024-11-28 10:39:24.339996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494672 ] 00:06:45.860 [2024-11-28 10:39:24.487713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.860 [2024-11-28 10:39:24.585951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.860 [2024-11-28 10:39:24.586022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.860 [2024-11-28 10:39:24.586025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1494855 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1494855 /var/tmp/spdk2.sock 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1494855 /var/tmp/spdk2.sock 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1494855 /var/tmp/spdk2.sock 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1494855 ']' 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.799 10:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.799 [2024-11-28 10:39:25.433556] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:46.799 [2024-11-28 10:39:25.433653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494855 ] 00:06:46.799 [2024-11-28 10:39:25.657844] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1494672 has claimed it. 00:06:46.799 [2024-11-28 10:39:25.657906] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.368 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1494855) - No such process 00:06:47.368 ERROR: process (pid: 1494855) is no longer running 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1494672 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1494672 ']' 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1494672 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1494672 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1494672' 00:06:47.368 killing process with pid 1494672 00:06:47.368 10:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1494672 00:06:47.368 10:39:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1494672 00:06:49.906 00:06:49.906 real 0m4.117s 00:06:49.906 user 0m11.216s 00:06:49.906 sys 0m0.750s 00:06:49.906 10:39:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.907 10:39:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.907 ************************************ 00:06:49.907 END TEST locking_overlapped_coremask 00:06:49.907 ************************************ 00:06:49.907 10:39:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.907 10:39:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.907 10:39:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.907 10:39:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.907 ************************************ 00:06:49.907 START TEST locking_overlapped_coremask_via_rpc 00:06:49.907 ************************************ 00:06:49.907 10:39:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:49.907 10:39:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1495416 00:06:49.907 10:39:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1495416 /var/tmp/spdk.sock 00:06:49.907 10:39:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.907 10:39:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1495416 ']' 00:06:49.907 10:39:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.907 10:39:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.907 10:39:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.907 10:39:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.907 10:39:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.907 [2024-11-28 10:39:28.547706] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:49.907 [2024-11-28 10:39:28.547800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495416 ] 00:06:49.907 [2024-11-28 10:39:28.699618] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.907 [2024-11-28 10:39:28.699660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.167 [2024-11-28 10:39:28.800123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.167 [2024-11-28 10:39:28.800190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.167 [2024-11-28 10:39:28.800205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.738 10:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.738 10:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.738 10:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1495532 00:06:50.738 10:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1495532 /var/tmp/spdk2.sock 00:06:50.738 10:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.738 10:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1495532 ']' 00:06:50.738 10:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.738 10:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.738 10:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.738 10:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.738 10:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.998 [2024-11-28 10:39:29.658838] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:50.998 [2024-11-28 10:39:29.658934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495532 ] 00:06:50.998 [2024-11-28 10:39:29.879618] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.998 [2024-11-28 10:39:29.879668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.257 [2024-11-28 10:39:30.096782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.257 [2024-11-28 10:39:30.100668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.257 [2024-11-28 10:39:30.100697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.794 [2024-11-28 10:39:32.161737] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1495416 has claimed it. 
00:06:53.794 request: 00:06:53.794 { 00:06:53.794 "method": "framework_enable_cpumask_locks", 00:06:53.794 "req_id": 1 00:06:53.794 } 00:06:53.794 Got JSON-RPC error response 00:06:53.794 response: 00:06:53.794 { 00:06:53.794 "code": -32603, 00:06:53.794 "message": "Failed to claim CPU core: 2" 00:06:53.794 } 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1495416 /var/tmp/spdk.sock 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1495416 ']' 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1495532 /var/tmp/spdk2.sock 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1495532 ']' 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.794 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.795 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:53.795 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.795 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.795 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.795 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.795 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.795 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.795 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.795 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.795 00:06:53.795 real 0m4.111s 00:06:53.795 user 0m1.076s 00:06:53.795 sys 0m0.231s 00:06:53.795 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.795 10:39:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.795 ************************************ 00:06:53.795 END TEST locking_overlapped_coremask_via_rpc 00:06:53.795 ************************************ 00:06:53.795 10:39:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.795 10:39:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1495416 ]] 00:06:53.795 10:39:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1495416 00:06:53.795 10:39:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1495416 ']' 00:06:53.795 10:39:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1495416 00:06:53.795 10:39:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:53.795 10:39:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.795 10:39:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1495416 00:06:53.795 10:39:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.795 10:39:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.795 10:39:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1495416' 00:06:53.795 killing process with pid 1495416 00:06:53.795 10:39:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1495416 00:06:53.795 10:39:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1495416 00:06:56.332 10:39:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1495532 ]] 00:06:56.332 10:39:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1495532 00:06:56.332 10:39:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1495532 ']' 00:06:56.332 10:39:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1495532 00:06:56.332 10:39:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:56.332 10:39:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:56.332 10:39:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1495532 00:06:56.332 10:39:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:56.332 10:39:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:56.332 10:39:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1495532' 00:06:56.332 killing process with pid 1495532 00:06:56.332 10:39:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1495532 00:06:56.332 10:39:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1495532 00:06:58.870 10:39:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.870 10:39:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:58.870 10:39:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1495416 ]] 00:06:58.870 10:39:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1495416 00:06:58.870 10:39:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1495416 ']' 00:06:58.870 10:39:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1495416 00:06:58.870 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1495416) - No such process 00:06:58.870 10:39:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1495416 is not found' 00:06:58.870 Process with pid 1495416 is not found 00:06:58.870 10:39:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1495532 ]] 00:06:58.870 10:39:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1495532 00:06:58.870 10:39:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1495532 ']' 00:06:58.870 10:39:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1495532 00:06:58.870 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1495532) - No such process 00:06:58.870 10:39:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1495532 is not found' 00:06:58.870 Process with pid 1495532 is not found 00:06:58.870 10:39:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.870 00:06:58.870 real 0m47.450s 00:06:58.870 user 1m21.086s 00:06:58.870 sys 0m7.756s 00:06:58.870 10:39:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.870 10:39:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.870 ************************************ 00:06:58.870 END TEST cpu_locks 00:06:58.870 ************************************ 00:06:58.870 00:06:58.870 real 1m16.815s 00:06:58.870 user 2m16.760s 00:06:58.870 sys 0m12.470s 00:06:58.870 10:39:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.870 10:39:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.870 ************************************ 00:06:58.870 END TEST event 00:06:58.870 ************************************ 00:06:58.870 10:39:37 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:58.870 10:39:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.870 10:39:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.870 10:39:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.870 ************************************ 00:06:58.870 START TEST thread 00:06:58.870 ************************************ 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@1129 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:58.870 * Looking for test storage... 00:06:58.870 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.870 10:39:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.870 10:39:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.870 10:39:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.870 10:39:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.870 10:39:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.870 10:39:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.870 10:39:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.870 10:39:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.870 10:39:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.870 10:39:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.870 10:39:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.870 10:39:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:58.870 10:39:37 thread -- scripts/common.sh@345 -- # : 1 00:06:58.870 10:39:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.870 10:39:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.870 10:39:37 thread -- scripts/common.sh@365 -- # decimal 1 00:06:58.870 10:39:37 thread -- scripts/common.sh@353 -- # local d=1 00:06:58.870 10:39:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.870 10:39:37 thread -- scripts/common.sh@355 -- # echo 1 00:06:58.870 10:39:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.870 10:39:37 thread -- scripts/common.sh@366 -- # decimal 2 00:06:58.870 10:39:37 thread -- scripts/common.sh@353 -- # local d=2 00:06:58.870 10:39:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.870 10:39:37 thread -- scripts/common.sh@355 -- # echo 2 00:06:58.870 10:39:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.870 10:39:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.870 10:39:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.870 10:39:37 thread -- scripts/common.sh@368 -- # return 0 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.870 --rc genhtml_branch_coverage=1 00:06:58.870 --rc genhtml_function_coverage=1 00:06:58.870 --rc genhtml_legend=1 00:06:58.870 --rc geninfo_all_blocks=1 00:06:58.870 --rc geninfo_unexecuted_blocks=1 00:06:58.870 00:06:58.870 ' 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.870 --rc genhtml_branch_coverage=1 00:06:58.870 --rc genhtml_function_coverage=1 00:06:58.870 --rc genhtml_legend=1 00:06:58.870 --rc geninfo_all_blocks=1 00:06:58.870 --rc geninfo_unexecuted_blocks=1 00:06:58.870 00:06:58.870 ' 00:06:58.870 10:39:37 thread -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.870 --rc genhtml_branch_coverage=1 00:06:58.870 --rc genhtml_function_coverage=1 00:06:58.870 --rc genhtml_legend=1 00:06:58.870 --rc geninfo_all_blocks=1 00:06:58.870 --rc geninfo_unexecuted_blocks=1 00:06:58.870 00:06:58.870 ' 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.870 --rc genhtml_branch_coverage=1 00:06:58.870 --rc genhtml_function_coverage=1 00:06:58.870 --rc genhtml_legend=1 00:06:58.870 --rc geninfo_all_blocks=1 00:06:58.870 --rc geninfo_unexecuted_blocks=1 00:06:58.870 00:06:58.870 ' 00:06:58.870 10:39:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.870 10:39:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.870 ************************************ 00:06:58.870 START TEST thread_poller_perf 00:06:58.870 ************************************ 00:06:58.870 10:39:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.130 [2024-11-28 10:39:37.790370] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:59.130 [2024-11-28 10:39:37.790452] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497110 ] 00:06:59.130 [2024-11-28 10:39:37.941804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.389 [2024-11-28 10:39:38.036468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.389 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:00.768 [2024-11-28T09:39:39.649Z] ====================================== 00:07:00.768 [2024-11-28T09:39:39.649Z] busy:2509663038 (cyc) 00:07:00.768 [2024-11-28T09:39:39.649Z] total_run_count: 422000 00:07:00.768 [2024-11-28T09:39:39.649Z] tsc_hz: 2500000000 (cyc) 00:07:00.768 [2024-11-28T09:39:39.649Z] ====================================== 00:07:00.768 [2024-11-28T09:39:39.649Z] poller_cost: 5947 (cyc), 2378 (nsec) 00:07:00.768 00:07:00.768 real 0m1.503s 00:07:00.768 user 0m1.349s 00:07:00.768 sys 0m0.149s 00:07:00.768 10:39:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.768 10:39:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.768 ************************************ 00:07:00.768 END TEST thread_poller_perf 00:07:00.768 ************************************ 00:07:00.768 10:39:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.768 10:39:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:00.768 10:39:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.768 10:39:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.768 ************************************ 00:07:00.768 START TEST thread_poller_perf 00:07:00.768 ************************************ 00:07:00.768 10:39:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.768 [2024-11-28 10:39:39.377179] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:00.768 [2024-11-28 10:39:39.377266] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497424 ] 00:07:00.768 [2024-11-28 10:39:39.533976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.768 [2024-11-28 10:39:39.626591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.768 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:02.146 [2024-11-28T09:39:41.027Z] ====================================== 00:07:02.146 [2024-11-28T09:39:41.027Z] busy:2502767088 (cyc) 00:07:02.146 [2024-11-28T09:39:41.027Z] total_run_count: 5160000 00:07:02.146 [2024-11-28T09:39:41.027Z] tsc_hz: 2500000000 (cyc) 00:07:02.146 [2024-11-28T09:39:41.027Z] ====================================== 00:07:02.146 [2024-11-28T09:39:41.027Z] poller_cost: 485 (cyc), 194 (nsec) 00:07:02.146 00:07:02.146 real 0m1.500s 00:07:02.146 user 0m1.334s 00:07:02.146 sys 0m0.161s 00:07:02.146 10:39:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.146 10:39:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.146 ************************************ 00:07:02.146 END TEST thread_poller_perf 00:07:02.146 ************************************ 00:07:02.146 10:39:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:02.146 00:07:02.146 real 0m3.361s 00:07:02.146 user 0m2.856s 00:07:02.146 sys 0m0.521s 00:07:02.146 10:39:40 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.146 10:39:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.146 ************************************ 00:07:02.146 END TEST thread 00:07:02.146 ************************************ 00:07:02.146 10:39:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:02.146 10:39:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:02.146 10:39:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.146 10:39:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.146 10:39:40 -- common/autotest_common.sh@10 -- # set +x 00:07:02.146 ************************************ 00:07:02.146 START TEST app_cmdline 00:07:02.146 ************************************ 00:07:02.146 10:39:40 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:02.406 * Looking for test storage... 
00:07:02.406 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:02.406 10:39:41 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.406 10:39:41 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.406 10:39:41 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.406 10:39:41 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.406 10:39:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:02.406 10:39:41 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.406 10:39:41 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.406 --rc genhtml_branch_coverage=1 00:07:02.406 --rc genhtml_function_coverage=1 00:07:02.406 --rc genhtml_legend=1 00:07:02.406 --rc geninfo_all_blocks=1 00:07:02.406 --rc geninfo_unexecuted_blocks=1 00:07:02.406 00:07:02.406 ' 00:07:02.406 10:39:41 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.406 --rc genhtml_branch_coverage=1 00:07:02.406 --rc genhtml_function_coverage=1 00:07:02.406 --rc genhtml_legend=1 00:07:02.406 --rc geninfo_all_blocks=1 00:07:02.406 --rc geninfo_unexecuted_blocks=1 
00:07:02.406 00:07:02.406 ' 00:07:02.406 10:39:41 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.406 --rc genhtml_branch_coverage=1 00:07:02.406 --rc genhtml_function_coverage=1 00:07:02.406 --rc genhtml_legend=1 00:07:02.406 --rc geninfo_all_blocks=1 00:07:02.406 --rc geninfo_unexecuted_blocks=1 00:07:02.406 00:07:02.406 ' 00:07:02.406 10:39:41 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.406 --rc genhtml_branch_coverage=1 00:07:02.406 --rc genhtml_function_coverage=1 00:07:02.406 --rc genhtml_legend=1 00:07:02.407 --rc geninfo_all_blocks=1 00:07:02.407 --rc geninfo_unexecuted_blocks=1 00:07:02.407 00:07:02.407 ' 00:07:02.407 10:39:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:02.407 10:39:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1497847 00:07:02.407 10:39:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1497847 00:07:02.407 10:39:41 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:02.407 10:39:41 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1497847 ']' 00:07:02.407 10:39:41 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.407 10:39:41 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.407 10:39:41 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.407 10:39:41 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.407 10:39:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.407 [2024-11-28 10:39:41.233958] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:02.407 [2024-11-28 10:39:41.234051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497847 ] 00:07:02.667 [2024-11-28 10:39:41.383231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.667 [2024-11-28 10:39:41.482465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.728 10:39:42 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.728 10:39:42 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:03.728 10:39:42 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:03.728 { 00:07:03.728 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:07:03.728 "fields": { 00:07:03.728 "major": 25, 00:07:03.728 "minor": 1, 00:07:03.728 "patch": 0, 00:07:03.728 "suffix": "-pre", 00:07:03.728 "commit": "35cd3e84d" 00:07:03.728 } 00:07:03.728 } 00:07:03.729 10:39:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:03.729 10:39:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:03.729 10:39:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:03.729 10:39:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:03.729 10:39:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:03.729 10:39:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:03.729 10:39:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.729 10:39:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:03.729 10:39:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:03.729 10:39:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:03.729 10:39:42 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.988 request: 00:07:03.988 { 00:07:03.988 "method": "env_dpdk_get_mem_stats", 00:07:03.988 "req_id": 1 00:07:03.988 } 00:07:03.988 Got JSON-RPC error response 00:07:03.988 response: 00:07:03.988 { 00:07:03.988 "code": -32601, 00:07:03.988 "message": "Method not found" 00:07:03.988 } 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.988 10:39:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1497847 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1497847 ']' 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1497847 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1497847 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1497847' 00:07:03.988 killing process with pid 1497847 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@973 -- # kill 1497847 00:07:03.988 10:39:42 app_cmdline -- common/autotest_common.sh@978 -- # wait 1497847 00:07:06.585 00:07:06.585 real 0m3.898s 00:07:06.585 user 0m4.045s 00:07:06.585 sys 0m0.665s 00:07:06.585 10:39:44 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.585 10:39:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.585 ************************************ 00:07:06.585 END TEST app_cmdline 00:07:06.585 ************************************ 00:07:06.585 10:39:44 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:06.585 10:39:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.585 10:39:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.585 10:39:44 -- common/autotest_common.sh@10 -- # set +x 00:07:06.585 ************************************ 00:07:06.585 START TEST version 00:07:06.585 ************************************ 00:07:06.585 10:39:44 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:06.585 * Looking for test storage... 
00:07:06.585 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:06.585 10:39:45 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.585 10:39:45 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.585 10:39:45 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.585 10:39:45 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.585 10:39:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.585 10:39:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.585 10:39:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.585 10:39:45 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.585 10:39:45 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.585 10:39:45 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.585 10:39:45 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.585 10:39:45 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.585 10:39:45 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.585 10:39:45 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.585 10:39:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.585 10:39:45 version -- scripts/common.sh@344 -- # case "$op" in 00:07:06.585 10:39:45 version -- scripts/common.sh@345 -- # : 1 00:07:06.585 10:39:45 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.585 10:39:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.585 10:39:45 version -- scripts/common.sh@365 -- # decimal 1 00:07:06.585 10:39:45 version -- scripts/common.sh@353 -- # local d=1 00:07:06.585 10:39:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.585 10:39:45 version -- scripts/common.sh@355 -- # echo 1 00:07:06.585 10:39:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.585 10:39:45 version -- scripts/common.sh@366 -- # decimal 2 00:07:06.585 10:39:45 version -- scripts/common.sh@353 -- # local d=2 00:07:06.585 10:39:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.585 10:39:45 version -- scripts/common.sh@355 -- # echo 2 00:07:06.585 10:39:45 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.585 10:39:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.585 10:39:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.585 10:39:45 version -- scripts/common.sh@368 -- # return 0 00:07:06.585 10:39:45 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.585 10:39:45 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.585 --rc genhtml_branch_coverage=1 00:07:06.585 --rc genhtml_function_coverage=1 00:07:06.585 --rc genhtml_legend=1 00:07:06.585 --rc geninfo_all_blocks=1 00:07:06.585 --rc geninfo_unexecuted_blocks=1 00:07:06.585 00:07:06.585 ' 00:07:06.585 10:39:45 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.585 --rc genhtml_branch_coverage=1 00:07:06.585 --rc genhtml_function_coverage=1 00:07:06.585 --rc genhtml_legend=1 00:07:06.585 --rc geninfo_all_blocks=1 00:07:06.585 --rc geninfo_unexecuted_blocks=1 00:07:06.585 00:07:06.585 ' 00:07:06.585 10:39:45 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.585 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.585 --rc genhtml_branch_coverage=1 00:07:06.585 --rc genhtml_function_coverage=1 00:07:06.585 --rc genhtml_legend=1 00:07:06.585 --rc geninfo_all_blocks=1 00:07:06.585 --rc geninfo_unexecuted_blocks=1 00:07:06.585 00:07:06.585 ' 00:07:06.585 10:39:45 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.585 --rc genhtml_branch_coverage=1 00:07:06.585 --rc genhtml_function_coverage=1 00:07:06.585 --rc genhtml_legend=1 00:07:06.585 --rc geninfo_all_blocks=1 00:07:06.585 --rc geninfo_unexecuted_blocks=1 00:07:06.585 00:07:06.585 ' 00:07:06.585 10:39:45 version -- app/version.sh@17 -- # get_header_version major 00:07:06.585 10:39:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:06.585 10:39:45 version -- app/version.sh@14 -- # cut -f2 00:07:06.585 10:39:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.585 10:39:45 version -- app/version.sh@17 -- # major=25 00:07:06.585 10:39:45 version -- app/version.sh@18 -- # get_header_version minor 00:07:06.585 10:39:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:06.585 10:39:45 version -- app/version.sh@14 -- # cut -f2 00:07:06.585 10:39:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.585 10:39:45 version -- app/version.sh@18 -- # minor=1 00:07:06.585 10:39:45 version -- app/version.sh@19 -- # get_header_version patch 00:07:06.585 10:39:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:06.585 10:39:45 version -- app/version.sh@14 -- # cut -f2 00:07:06.585 10:39:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.585 10:39:45 version -- app/version.sh@19 -- # patch=0 00:07:06.585 10:39:45 version -- app/version.sh@20 -- # get_header_version suffix 00:07:06.585 10:39:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:06.585 10:39:45 version -- app/version.sh@14 -- # cut -f2 00:07:06.585 10:39:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.585 10:39:45 version -- app/version.sh@20 -- # suffix=-pre 00:07:06.585 10:39:45 version -- app/version.sh@22 -- # version=25.1 00:07:06.585 10:39:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:06.585 10:39:45 version -- app/version.sh@28 -- # version=25.1rc0 00:07:06.586 10:39:45 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:06.586 10:39:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:06.586 10:39:45 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:06.586 10:39:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:06.586 00:07:06.586 real 0m0.239s 00:07:06.586 user 0m0.146s 00:07:06.586 sys 0m0.139s 00:07:06.586 10:39:45 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.586 10:39:45 version -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.586 ************************************ 00:07:06.586 END TEST version 00:07:06.586 ************************************ 00:07:06.586 10:39:45 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:06.586 10:39:45 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:06.586 10:39:45 -- spdk/autotest.sh@194 -- # uname -s 00:07:06.586 10:39:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:06.586 10:39:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:06.586 10:39:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:06.586 10:39:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:06.586 10:39:45 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:06.586 10:39:45 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:06.586 10:39:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.586 10:39:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.586 10:39:45 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:06.586 10:39:45 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:06.586 10:39:45 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:06.586 10:39:45 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:06.586 10:39:45 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:07:06.586 10:39:45 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:06.586 10:39:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.586 10:39:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.586 10:39:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.586 ************************************ 00:07:06.586 START TEST nvmf_rdma 00:07:06.586 ************************************ 00:07:06.586 10:39:45 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:06.586 * Looking for test storage... 00:07:06.586 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:06.586 10:39:45 nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.586 10:39:45 nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.586 10:39:45 nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.846 10:39:45 nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.846 10:39:45 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:07:06.846 10:39:45 nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.846 10:39:45 nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.846 --rc genhtml_branch_coverage=1 00:07:06.846 --rc genhtml_function_coverage=1 00:07:06.846 --rc genhtml_legend=1 00:07:06.846 --rc geninfo_all_blocks=1 00:07:06.846 --rc geninfo_unexecuted_blocks=1 00:07:06.846 00:07:06.846 ' 00:07:06.846 10:39:45 nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.846 --rc genhtml_branch_coverage=1 00:07:06.846 --rc genhtml_function_coverage=1 00:07:06.846 --rc genhtml_legend=1 00:07:06.846 --rc geninfo_all_blocks=1 00:07:06.846 --rc geninfo_unexecuted_blocks=1 00:07:06.846 00:07:06.846 ' 00:07:06.846 10:39:45 nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.846 --rc genhtml_branch_coverage=1 00:07:06.846 --rc genhtml_function_coverage=1 00:07:06.846 --rc genhtml_legend=1 00:07:06.846 --rc geninfo_all_blocks=1 00:07:06.846 --rc geninfo_unexecuted_blocks=1 00:07:06.846 00:07:06.846 ' 00:07:06.846 10:39:45 nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.846 --rc genhtml_branch_coverage=1 00:07:06.846 --rc genhtml_function_coverage=1 00:07:06.846 --rc genhtml_legend=1 00:07:06.846 --rc geninfo_all_blocks=1 00:07:06.846 --rc geninfo_unexecuted_blocks=1 00:07:06.846 00:07:06.846 ' 00:07:06.846 10:39:45 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:06.846 10:39:45 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:06.846 10:39:45 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:06.846 10:39:45 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.846 10:39:45 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.846 10:39:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:06.846 ************************************ 00:07:06.846 START TEST nvmf_target_core 00:07:06.846 ************************************ 00:07:06.846 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:06.846 * Looking for test storage... 00:07:06.846 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:06.846 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.846 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.846 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.107 --rc genhtml_branch_coverage=1 00:07:07.107 --rc genhtml_function_coverage=1 00:07:07.107 --rc genhtml_legend=1 00:07:07.107 --rc geninfo_all_blocks=1 00:07:07.107 --rc geninfo_unexecuted_blocks=1 00:07:07.107 00:07:07.107 ' 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.107 --rc genhtml_branch_coverage=1 00:07:07.107 --rc genhtml_function_coverage=1 00:07:07.107 --rc genhtml_legend=1 00:07:07.107 --rc geninfo_all_blocks=1 00:07:07.107 --rc geninfo_unexecuted_blocks=1 00:07:07.107 00:07:07.107 ' 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.107 --rc genhtml_branch_coverage=1 00:07:07.107 --rc genhtml_function_coverage=1 00:07:07.107 --rc genhtml_legend=1 00:07:07.107 --rc geninfo_all_blocks=1 00:07:07.107 --rc geninfo_unexecuted_blocks=1 00:07:07.107 00:07:07.107 ' 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.107 --rc genhtml_branch_coverage=1 00:07:07.107 --rc genhtml_function_coverage=1 00:07:07.107 --rc genhtml_legend=1 00:07:07.107 --rc geninfo_all_blocks=1 00:07:07.107 --rc geninfo_unexecuted_blocks=1 00:07:07.107 00:07:07.107 ' 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.107 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.108 
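Aside, before the abort-test output begins: the version check traced above amounts to pulling the SPDK_VERSION_* macros out of include/spdk/version.h, assembling a "25.1rc0"-style string, and comparing it against the Python package. A minimal standalone sketch of that flow, assuming the header uses tab-separated #define lines (which is what the `cut -f2` in the trace relies on) and that the observed "-pre" suffix always renders as "rc0":

#!/usr/bin/env bash
# Sketch of the version check above: read the SPDK_VERSION_* macros from
# version.h, build the "25.1rc0"-style string, and compare it with the
# Python package. Assumes tab-separated "#define" lines and that "-pre"
# maps to "rc0", as seen in this run.
SPDK_DIR=${1:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
header="$SPDK_DIR/include/spdk/version.h"

get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$header" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)    # 25 in this run
minor=$(get_header_version MINOR)    # 1
patch=$(get_header_version PATCH)    # 0
suffix=$(get_header_version SUFFIX)  # -pre

version="$major.$minor"
(( patch != 0 )) && version="$version.$patch"
[[ $suffix == -pre ]] && version="${version}rc0"

py_version=$(PYTHONPATH="$SPDK_DIR/python" python3 -c 'import spdk; print(spdk.__version__)')
[[ $py_version == "$version" ]] && echo "header and python module agree: $version"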
************************************ 00:07:07.108 START TEST nvmf_abort 00:07:07.108 ************************************ 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:07.108 * Looking for test storage... 00:07:07.108 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.108 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.368 --rc genhtml_branch_coverage=1 00:07:07.368 --rc genhtml_function_coverage=1 00:07:07.368 --rc genhtml_legend=1 00:07:07.368 --rc geninfo_all_blocks=1 00:07:07.368 --rc geninfo_unexecuted_blocks=1 00:07:07.368 00:07:07.368 ' 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.368 --rc genhtml_branch_coverage=1 00:07:07.368 --rc genhtml_function_coverage=1 00:07:07.368 --rc genhtml_legend=1 00:07:07.368 --rc geninfo_all_blocks=1 00:07:07.368 --rc geninfo_unexecuted_blocks=1 00:07:07.368 00:07:07.368 ' 00:07:07.368 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.368 --rc genhtml_branch_coverage=1 00:07:07.368 --rc genhtml_function_coverage=1 00:07:07.368 --rc genhtml_legend=1 00:07:07.368 --rc geninfo_all_blocks=1 00:07:07.368 --rc geninfo_unexecuted_blocks=1 00:07:07.368 00:07:07.368 ' 00:07:07.369 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.369 --rc genhtml_branch_coverage=1 00:07:07.369 --rc genhtml_function_coverage=1 00:07:07.369 --rc genhtml_legend=1 00:07:07.369 --rc geninfo_all_blocks=1 00:07:07.369 --rc geninfo_unexecuted_blocks=1 00:07:07.369 00:07:07.369 ' 00:07:07.369 10:39:45 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.369 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:07.369 10:39:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:15.500 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:15.501 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:15.501 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:15.501 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:15.501 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:15.501 6: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:07:15.501 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:15.501 altname enp217s0f0np0 00:07:15.501 altname ens818f0np0 00:07:15.501 inet 192.168.100.8/24 scope global mlx_0_0 00:07:15.501 valid_lft forever preferred_lft forever 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:15.501 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:15.501 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:15.501 altname enp217s0f1np1 00:07:15.501 altname ens818f1np1 00:07:15.501 inet 192.168.100.9/24 scope global mlx_0_1 00:07:15.501 valid_lft forever preferred_lft forever 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:15.501 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.502 10:39:54 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:15.502 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:15.762 192.168.100.9' 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:15.762 192.168.100.9' 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:15.762 192.168.100.9' 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:15.762 10:39:54 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1502756 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1502756 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1502756 ']' 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.762 10:39:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.762 [2024-11-28 10:39:54.518383] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:15.762 [2024-11-28 10:39:54.518479] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.022 [2024-11-28 10:39:54.673176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.022 [2024-11-28 10:39:54.773259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.022 [2024-11-28 10:39:54.773312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.022 [2024-11-28 10:39:54.773325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.022 [2024-11-28 10:39:54.773338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.022 [2024-11-28 10:39:54.773349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
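The nvmftestinit trace above reduces to two steps on a physical (phy) host: load the RDMA kernel modules, then read back the first IPv4 address of each Mellanox interface the way get_ip_address does. A condensed sketch; the interface names mlx_0_0/mlx_0_1 are the ones this host reports and will differ elsewhere, and modprobe requires root:

#!/usr/bin/env bash
# Sketch of the RDMA bring-up traced above: load the same kernel modules,
# then collect each interface's IPv4 address via `ip -o -4 | awk | cut`.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done

get_ip_address() {
    # e.g. "6: mlx_0_0  inet 192.168.100.8/24 ..." -> 192.168.100.8
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

for nic in mlx_0_0 mlx_0_1; do
    echo "$nic: $(get_ip_address "$nic")"
done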
00:07:16.022 [2024-11-28 10:39:54.775812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.022 [2024-11-28 10:39:54.775871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.022 [2024-11-28 10:39:54.775879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.594 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.594 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:16.594 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:16.594 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.594 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.594 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.594 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:07:16.594 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.594 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.594 [2024-11-28 10:39:55.394545] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f4836716940) succeed. 00:07:16.594 [2024-11-28 10:39:55.409858] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f4835dbd940) succeed. 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.853 Malloc0 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.853 Delay0 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.853 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.854 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.854 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:16.854 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.854 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.113 [2024-11-28 10:39:55.735740] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:17.113 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.113 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:17.113 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.113 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.113 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.113 10:39:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:17.113 [2024-11-28 10:39:55.880735] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:19.650 Initializing NVMe Controllers 00:07:19.650 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:19.650 controller IO queue size 128 less than required 00:07:19.650 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:19.650 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:19.650 Initialization complete. Launching workers. 
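Annotation: condensed, the target-side setup that abort.sh traced just above (lines @17 through @30) amounts to the following RPC calls. This is a sketch only, since rpc_cmd in the harness is a wrapper around scripts/rpc.py, but the sub-commands and arguments are copied verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256          # RDMA transport
    $rpc bdev_malloc_create 64 4096 -b Malloc0                                           # 64 MB malloc bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay bdev on top of Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    # the abort example then drives 128-deep I/O with aborts against that listener:
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128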
00:07:19.650 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38003 00:07:19.650 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38064, failed to submit 62 00:07:19.650 success 38006, unsuccessful 58, failed 0 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:19.650 rmmod nvme_rdma 00:07:19.650 rmmod nvme_fabrics 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1502756 ']' 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1502756 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1502756 ']' 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1502756 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1502756 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1502756' 00:07:19.650 killing process with pid 1502756 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1502756 00:07:19.650 10:39:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1502756 00:07:21.029 10:39:59 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:21.029 10:39:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:21.029 00:07:21.029 real 0m14.068s 00:07:21.029 user 0m19.001s 00:07:21.029 sys 0m7.108s 00:07:21.029 10:39:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.029 10:39:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.029 ************************************ 00:07:21.029 END TEST nvmf_abort 00:07:21.029 ************************************ 00:07:21.288 10:39:59 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:21.288 10:39:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.288 10:39:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.288 10:39:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:21.288 ************************************ 00:07:21.288 START TEST nvmf_ns_hotplug_stress 00:07:21.288 ************************************ 00:07:21.288 10:39:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:21.288 * Looking for test storage... 00:07:21.288 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.288 --rc genhtml_branch_coverage=1 00:07:21.288 --rc genhtml_function_coverage=1 00:07:21.288 --rc genhtml_legend=1 00:07:21.288 --rc geninfo_all_blocks=1 00:07:21.288 --rc geninfo_unexecuted_blocks=1 00:07:21.288 00:07:21.288 ' 00:07:21.288 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.289 --rc genhtml_branch_coverage=1 00:07:21.289 --rc genhtml_function_coverage=1 00:07:21.289 --rc genhtml_legend=1 00:07:21.289 --rc geninfo_all_blocks=1 00:07:21.289 --rc geninfo_unexecuted_blocks=1 00:07:21.289 00:07:21.289 ' 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.289 --rc genhtml_branch_coverage=1 00:07:21.289 --rc genhtml_function_coverage=1 00:07:21.289 --rc genhtml_legend=1 00:07:21.289 --rc geninfo_all_blocks=1 00:07:21.289 --rc geninfo_unexecuted_blocks=1 00:07:21.289 00:07:21.289 ' 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:21.289 --rc genhtml_branch_coverage=1 00:07:21.289 --rc genhtml_function_coverage=1 00:07:21.289 --rc genhtml_legend=1 00:07:21.289 --rc geninfo_all_blocks=1 00:07:21.289 --rc geninfo_unexecuted_blocks=1 00:07:21.289 00:07:21.289 ' 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.289 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.549 10:40:00 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.549 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:21.549 10:40:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:29.676 10:40:07 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:29.676 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:29.676 10:40:07 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:29.676 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:29.677 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:29.677 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:29.677 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:29.677 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:29.677 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:29.677 altname enp217s0f0np0 00:07:29.677 altname ens818f0np0 00:07:29.677 inet 192.168.100.8/24 scope global mlx_0_0 00:07:29.677 valid_lft forever preferred_lft forever 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:29.677 10:40:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:29.677 10:40:08 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:29.677 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:29.677 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:29.677 altname enp217s0f1np1 00:07:29.677 altname ens818f1np1 00:07:29.677 inet 192.168.100.9/24 scope global mlx_0_1 00:07:29.677 valid_lft forever preferred_lft forever 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:29.677 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:29.678 192.168.100.9' 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:29.678 192.168.100.9' 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:29.678 192.168.100.9' 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1507752 00:07:29.678 10:40:08 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1507752 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1507752 ']' 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.678 10:40:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:29.678 [2024-11-28 10:40:08.210341] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:29.678 [2024-11-28 10:40:08.210447] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.678 [2024-11-28 10:40:08.362872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.678 [2024-11-28 10:40:08.463985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.678 [2024-11-28 10:40:08.464035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.678 [2024-11-28 10:40:08.464048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.678 [2024-11-28 10:40:08.464061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.678 [2024-11-28 10:40:08.464070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
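Annotation: for reference, the get_ip_address calls traced a few lines earlier (common.sh@116-@117) are just a short pipeline over each discovered RDMA netdev; roughly, with the interface names taken from the "Found net devices" lines above:

    for dev in mlx_0_0 mlx_0_1; do
        # first IPv4 address on the interface, e.g. "192.168.100.8/24" -> "192.168.100.8"
        ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
    done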
00:07:29.678 [2024-11-28 10:40:08.466409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.678 [2024-11-28 10:40:08.466468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.678 [2024-11-28 10:40:08.466476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.247 10:40:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.247 10:40:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:30.247 10:40:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:30.247 10:40:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:30.247 10:40:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:30.247 10:40:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.247 10:40:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:30.247 10:40:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:30.507 [2024-11-28 10:40:09.247681] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f0a9bd3e940) succeed. 00:07:30.507 [2024-11-28 10:40:09.257110] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f0a9bbbd940) succeed. 00:07:30.767 10:40:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:31.026 10:40:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:31.026 [2024-11-28 10:40:09.837550] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:31.026 10:40:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:31.286 10:40:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:31.546 Malloc0 00:07:31.546 10:40:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:31.805 Delay0 00:07:31.805 10:40:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.065 10:40:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:32.065 NULL1 00:07:32.065 10:40:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:32.324 10:40:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:32.324 10:40:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1508310 00:07:32.324 10:40:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:32.324 10:40:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.584 10:40:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.584 10:40:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:32.584 10:40:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:32.843 true 00:07:32.843 10:40:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:32.843 10:40:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.102 10:40:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.361 10:40:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:33.361 10:40:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:33.620 true 00:07:33.620 10:40:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:33.620 10:40:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.620 10:40:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.879 10:40:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:33.879 10:40:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:34.138 true 00:07:34.138 
10:40:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:34.138 10:40:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.397 10:40:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.397 10:40:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:34.397 10:40:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:34.656 true 00:07:34.656 10:40:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:34.656 10:40:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.915 10:40:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.174 10:40:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:35.174 10:40:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:35.174 true 00:07:35.174 10:40:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:35.174 10:40:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.431 10:40:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.689 10:40:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:35.689 10:40:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:35.946 true 00:07:35.946 10:40:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:35.946 10:40:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.946 10:40:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.204 10:40:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:36.204 10:40:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:36.462 true 00:07:36.462 10:40:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:36.462 10:40:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.721 10:40:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.721 10:40:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:36.721 10:40:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:36.980 true 00:07:36.980 10:40:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:36.980 10:40:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.238 10:40:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.496 10:40:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:37.496 10:40:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:37.496 true 00:07:37.496 10:40:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:37.496 10:40:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.754 10:40:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.013 10:40:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:38.013 10:40:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:38.271 true 00:07:38.271 10:40:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:38.271 10:40:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.530 10:40:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.530 10:40:17 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:38.530 10:40:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:38.788 true 00:07:38.788 10:40:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:38.788 10:40:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.048 10:40:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.307 10:40:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:39.307 10:40:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:39.307 true 00:07:39.307 10:40:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:39.307 10:40:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.566 10:40:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.824 10:40:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:39.824 10:40:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:40.084 true 00:07:40.084 10:40:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:40.084 10:40:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.084 10:40:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.343 10:40:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:40.343 10:40:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:40.602 true 00:07:40.602 10:40:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:40.602 10:40:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.861 10:40:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.861 10:40:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:40.861 10:40:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:41.121 true 00:07:41.121 10:40:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:41.121 10:40:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.380 10:40:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.640 10:40:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:41.640 10:40:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:41.640 true 00:07:41.640 10:40:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:41.640 10:40:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.918 10:40:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.184 10:40:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:42.184 10:40:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:42.444 true 00:07:42.444 10:40:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:42.444 10:40:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.444 10:40:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.703 10:40:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:42.703 10:40:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:42.962 true 00:07:42.962 10:40:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:42.962 10:40:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.222 10:40:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.222 10:40:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:43.222 10:40:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:43.481 true 00:07:43.481 10:40:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:43.481 10:40:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.740 10:40:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.000 10:40:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:44.000 10:40:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:44.000 true 00:07:44.259 10:40:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:44.259 10:40:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.259 10:40:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.518 10:40:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:44.518 10:40:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:44.778 true 00:07:44.778 10:40:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:44.778 10:40:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.038 10:40:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.038 10:40:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:45.038 10:40:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:45.297 true 00:07:45.297 10:40:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:45.297 10:40:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.557 10:40:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.817 10:40:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:45.817 10:40:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:45.817 true 00:07:45.817 10:40:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:45.817 10:40:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.076 10:40:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.335 10:40:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:46.335 10:40:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:46.595 true 00:07:46.595 10:40:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:46.595 10:40:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.855 10:40:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.855 10:40:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:46.855 10:40:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:47.115 true 00:07:47.115 10:40:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:47.115 10:40:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.375 10:40:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.634 10:40:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:47.634 10:40:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:47.634 true 00:07:47.634 10:40:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:47.634 10:40:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.894 10:40:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.153 10:40:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:48.153 10:40:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:48.412 true 00:07:48.412 10:40:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:48.412 10:40:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.412 10:40:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.671 10:40:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:48.671 10:40:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:48.931 true 00:07:48.931 10:40:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:48.931 10:40:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.189 10:40:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.190 10:40:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:49.190 10:40:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:49.449 true 00:07:49.449 10:40:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:49.449 10:40:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.708 10:40:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.968 10:40:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:49.968 10:40:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:49.968 true 00:07:49.968 10:40:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:49.968 10:40:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.227 10:40:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.486 10:40:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:50.486 10:40:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:50.745 true 00:07:50.745 10:40:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:50.745 10:40:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.745 10:40:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.005 10:40:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:51.005 10:40:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:51.263 true 00:07:51.263 10:40:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:51.263 10:40:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.522 10:40:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.781 10:40:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:51.781 10:40:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:51.781 true 00:07:51.781 10:40:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:51.782 10:40:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.041 10:40:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.301 10:40:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:52.301 10:40:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:52.301 true 00:07:52.560 10:40:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:52.560 10:40:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.560 10:40:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.819 10:40:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:52.819 10:40:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:53.078 true 00:07:53.078 10:40:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:53.078 10:40:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.337 10:40:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.337 10:40:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:53.337 10:40:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:53.597 true 00:07:53.597 10:40:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:53.597 10:40:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.855 10:40:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.113 10:40:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:54.113 10:40:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:54.113 true 00:07:54.113 10:40:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:54.113 10:40:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.372 10:40:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.631 10:40:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:54.631 10:40:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:54.890 true 00:07:54.890 10:40:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:54.890 10:40:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.150 10:40:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.150 10:40:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:55.150 10:40:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:55.408 true 00:07:55.408 10:40:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:55.408 10:40:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.667 10:40:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.926 10:40:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:55.926 10:40:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:55.926 true 00:07:55.926 10:40:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:55.926 10:40:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.185 10:40:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.449 10:40:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:56.449 10:40:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:56.714 true 00:07:56.714 10:40:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:56.714 10:40:35 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.714 10:40:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.973 10:40:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:56.973 10:40:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:57.232 true 00:07:57.232 10:40:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:57.232 10:40:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.492 10:40:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.492 10:40:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:57.492 10:40:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:57.751 true 00:07:57.751 10:40:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:57.751 10:40:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.010 10:40:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.270 10:40:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:58.270 10:40:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:58.270 true 00:07:58.529 10:40:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:58.529 10:40:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.529 10:40:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.788 10:40:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:58.788 10:40:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:59.047 true 
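The xtrace entries above repeat one resize cycle of ns_hotplug_stress.sh (traced script lines 44-50): while the spdk_nvme_perf process started earlier (PERF_PID 1508310) is still alive, namespace 1 is detached, the Delay0 bdev is re-attached, and the NULL1 bdev is grown by one block. A minimal bash sketch of that cycle, reconstructed from the trace; the loop scaffolding, the increment form, and the rpc_py shorthand are assumptions, only the commands themselves are taken from the log:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed shorthand for the RPC client used above
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                             # line 44: run while spdk_nvme_perf is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: detach namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                      # line 49: 1001, 1002, ... as seen in the trace
        $rpc_py bdev_null_resize NULL1 $null_size                         # line 50: resize NULL1 under active I/O
    done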
00:07:59.047 10:40:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:59.047 10:40:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.307 10:40:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.307 10:40:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:59.307 10:40:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:59.566 true 00:07:59.566 10:40:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:07:59.566 10:40:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.825 10:40:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.084 10:40:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:00.084 10:40:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:00.084 true 00:08:00.084 10:40:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:08:00.084 10:40:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.344 10:40:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.603 10:40:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:00.603 10:40:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:00.863 true 00:08:00.863 10:40:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:08:00.863 10:40:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.123 10:40:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.123 10:40:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:01.123 10:40:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:01.381 true 00:08:01.382 10:40:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:08:01.382 10:40:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.641 10:40:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.900 10:40:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:01.900 10:40:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:01.900 true 00:08:01.900 10:40:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:08:01.900 10:40:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.159 10:40:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.419 10:40:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:08:02.419 10:40:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:08:02.678 true 00:08:02.678 10:40:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:08:02.678 10:40:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.678 10:40:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.937 10:40:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:08:02.937 10:40:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:08:03.196 true 00:08:03.196 10:40:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310 00:08:03.196 10:40:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.455 10:40:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.714 10:40:42 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:08:03.714 10:40:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:08:03.714 Initializing NVMe Controllers
00:08:03.714 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:03.714 Controller IO queue size 128, less than required.
00:08:03.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:03.714 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:03.714 Initialization complete. Launching workers.
00:08:03.714 ========================================================
00:08:03.714 Latency(us)
00:08:03.714 Device Information : IOPS MiB/s Average min max
00:08:03.714 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35708.44 17.44 3584.43 1831.33 6816.86
00:08:03.714 ========================================================
00:08:03.714 Total : 35708.44 17.44 3584.43 1831.33 6816.86
00:08:03.714
00:08:03.714 true
00:08:03.714 10:40:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1508310
00:08:03.714 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1508310) - No such process
00:08:03.714 10:40:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1508310
00:08:03.714 10:40:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.973 10:40:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:04.232 10:40:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:04.232 10:40:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:04.232 10:40:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:04.232 10:40:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:04.232 10:40:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:04.232 null0
00:08:04.491 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:04.491 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:04.491 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:04.491 null1
00:08:04.491 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:04.491 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:04.491 10:40:43
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:04.751 null2 00:08:04.751 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.751 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.751 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:05.010 null3 00:08:05.010 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:05.010 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:05.010 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:05.010 null4 00:08:05.270 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:05.270 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:05.270 10:40:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:05.270 null5 00:08:05.270 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:05.270 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:05.270 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:05.530 null6 00:08:05.530 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:05.530 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:05.530 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:05.790 null7 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
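At this point the perf run has finished (the "No such process" message above for PID 1508310) and the test moves to its second phase: eight 100-block null bdevs (null0 through null7) are created and eight add_remove workers are launched in parallel, one namespace each, traced above and below as script lines 58-66. A minimal sketch of that setup, using the same assumed rpc_py shorthand; the loop syntax is a reconstruction, the commands and values are the ones shown in the trace:

    nthreads=8                                       # line 58
    pids=()
    for ((i = 0; i < nthreads; i++)); do             # line 59
        $rpc_py bdev_null_create "null$i" 100 4096   # line 60: 100 blocks of 4096 bytes each
    done
    for ((i = 0; i < nthreads; i++)); do             # line 62
        add_remove $((i + 1)) "null$i" &             # line 63: hotplug namespace i+1 in the background
        pids+=($!)                                   # line 64: remember the worker PID
    done
    wait "${pids[@]}"                                # line 66: wait for all eight workers (PIDs 1514323 ... 1514336)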
00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1514323 1514324 1514326 1514328 1514330 1514332 1514334 1514336 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.790 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.050 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.050 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.050 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.051 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.051 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.051 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.051 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.051 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.051 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.051 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.051 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.051 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.051 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.051 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
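The sh@62 through sh@64 entries, together with the "wait 1514323 ... 1514336" line, show how the workers are driven: the launcher backgrounds one add_remove call per null bdev, records each worker PID, and then waits for all of them, which is why the adds and removes for nsids 1 through 8 interleave freely in this part of the log. A sketch of that launch-and-wait pattern, again inferred from the trace rather than quoted from the repository, and assuming nthreads is 8 as the eight waited PIDs suggest:

    # Start one background add_remove worker per null bdev and wait for all of them
    # (pattern implied by the sh@62-@64 entries and the sh@66 wait; nthreads assumed to be 8).
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # nsid 1..8 maps to null0..null7, as seen in the trace
        pids+=($!)
    done
    wait "${pids[@]}"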
00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.310 10:40:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.310 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.310 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.310 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.310 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.310 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.310 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.310 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.310 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.569 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.829 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.829 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.829 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.829 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.829 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.829 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.829 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.829 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.088 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.089 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.089 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.089 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.089 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.347 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.347 10:40:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.347 10:40:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.347 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
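Each iteration therefore issues two JSON-RPC calls against the running nvmf target through scripts/rpc.py. The command forms below are copied directly from the trace (shown here with nsid 6 and bdev null5, matching the entry just above); only the line continuations are added for readability:

    # Add namespace 6 (backed by bdev null5) to the subsystem, then remove it again.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6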
00:08:07.625 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.625 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.625 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.625 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.626 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.626 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.626 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.626 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.884 10:40:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.884 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.885 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.885 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.885 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.885 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.885 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.885 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.885 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.885 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.885 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.885 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.885 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.144 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.144 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.144 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.144 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.144 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.144 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.144 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.144 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.144 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.144 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.144 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.145 10:40:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.145 10:40:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.404 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.404 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.404 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.404 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.404 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.404 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.404 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.404 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.663 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.664 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.664 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.924 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.184 10:40:47 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.184 10:40:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.184 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.184 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.184 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.184 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.184 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.184 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.444 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.703 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.703 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.703 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.703 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.703 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.703 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.703 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.703 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:09.963 rmmod nvme_rdma 00:08:09.963 rmmod nvme_fabrics 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1507752 ']' 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1507752 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1507752 ']' 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1507752 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507752 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507752' 00:08:09.963 killing process with pid 1507752 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1507752 00:08:09.963 10:40:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1507752 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:11.868 00:08:11.868 real 0m50.432s 00:08:11.868 user 3m32.111s 00:08:11.868 sys 0m17.598s 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:11.868 ************************************ 00:08:11.868 END TEST nvmf_ns_hotplug_stress 00:08:11.868 ************************************ 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.868 ************************************ 00:08:11.868 START TEST nvmf_delete_subsystem 00:08:11.868 ************************************ 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:11.868 * Looking for test storage... 00:08:11.868 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.868 --rc genhtml_branch_coverage=1 00:08:11.868 --rc genhtml_function_coverage=1 00:08:11.868 --rc genhtml_legend=1 00:08:11.868 --rc geninfo_all_blocks=1 00:08:11.868 --rc geninfo_unexecuted_blocks=1 00:08:11.868 00:08:11.868 ' 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.868 --rc genhtml_branch_coverage=1 00:08:11.868 --rc genhtml_function_coverage=1 00:08:11.868 --rc genhtml_legend=1 00:08:11.868 --rc geninfo_all_blocks=1 00:08:11.868 --rc geninfo_unexecuted_blocks=1 00:08:11.868 00:08:11.868 ' 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.868 --rc genhtml_branch_coverage=1 00:08:11.868 --rc genhtml_function_coverage=1 00:08:11.868 --rc genhtml_legend=1 00:08:11.868 --rc geninfo_all_blocks=1 00:08:11.868 --rc geninfo_unexecuted_blocks=1 00:08:11.868 00:08:11.868 ' 00:08:11.868 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.868 --rc genhtml_branch_coverage=1 00:08:11.868 --rc genhtml_function_coverage=1 00:08:11.868 --rc genhtml_legend=1 00:08:11.868 --rc geninfo_all_blocks=1 00:08:11.868 --rc geninfo_unexecuted_blocks=1 00:08:11.869 00:08:11.869 ' 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.869 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:11.869 10:40:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:21.854 10:40:58 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:21.854 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:21.854 
10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:21.854 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.854 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:21.855 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:21.855 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:21.855 10:40:58 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:21.855 10:40:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:21.855 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:21.855 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:21.855 altname enp217s0f0np0 00:08:21.855 altname ens818f0np0 00:08:21.855 inet 192.168.100.8/24 scope global mlx_0_0 00:08:21.855 valid_lft forever preferred_lft forever 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:21.855 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:21.855 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:21.855 altname enp217s0f1np1 00:08:21.855 
altname ens818f1np1 00:08:21.855 inet 192.168.100.9/24 scope global mlx_0_1 00:08:21.855 valid_lft forever preferred_lft forever 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:21.855 10:40:59 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:21.855 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:21.856 192.168.100.9' 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:21.856 192.168.100.9' 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:21.856 192.168.100.9' 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1519620 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1519620 00:08:21.856 10:40:59 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1519620 ']' 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.856 10:40:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.856 [2024-11-28 10:40:59.226952] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:21.856 [2024-11-28 10:40:59.227058] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.856 [2024-11-28 10:40:59.382770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:21.856 [2024-11-28 10:40:59.480160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.856 [2024-11-28 10:40:59.480212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.856 [2024-11-28 10:40:59.480224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.856 [2024-11-28 10:40:59.480238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.856 [2024-11-28 10:40:59.480248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
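The startup traced above is the generic nvmfappstart helper: it launches the nvmf_tgt application with a two-core mask and then blocks until the target's JSON-RPC socket answers. A minimal standalone sketch of the same idea, in bash; the polling loop is an illustrative stand-in for the suite's waitforlisten helper, not its literal code:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # shm id 0, tracepoint group mask 0xFFFF, cores 0-1
  nvmfpid=$!
  # poll the default RPC socket until the target accepts commands
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done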
00:08:21.856 [2024-11-28 10:40:59.482417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.856 [2024-11-28 10:40:59.482425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.856 [2024-11-28 10:41:00.094065] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7fe8cc7bd940) succeed. 00:08:21.856 [2024-11-28 10:41:00.103418] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7fe8cc779940) succeed. 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.856 [2024-11-28 10:41:00.261437] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.856 NULL1 00:08:21.856 10:41:00 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.856 Delay0 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1519761 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:21.856 10:41:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:21.856 [2024-11-28 10:41:00.419783] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
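Everything the trace just configured goes through the same JSON-RPC interface; rpc_cmd in these tests is a thin wrapper around scripts/rpc.py. A standalone sketch of the sequence this run used, with the NQN, serial, listener address, bdev sizes and perf arguments taken from the trace above (the rpc.py invocation style itself is an assumption, not copied from the script):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512      # 1000 MiB backing bdev, 512 B blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s avg/p99 latencies (values in us)
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # keep I/O queued against the namespace while the subsystem is deleted underneath it
  ./build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 70 -t 5 -P 4 -c 0xC \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
  perf_pid=$!

The one-second delay bdev appears to be the point of the setup: it guarantees the perf job still has outstanding I/O when nvmf_delete_subsystem is issued, which is the race the following trace exercises.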
00:08:23.762 10:41:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.762 10:41:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.762 10:41:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.698 NVMe io qpair process completion error 00:08:24.698 NVMe io qpair process completion error 00:08:24.698 NVMe io qpair process completion error 00:08:24.698 NVMe io qpair process completion error 00:08:24.698 NVMe io qpair process completion error 00:08:24.698 NVMe io qpair process completion error 00:08:24.698 10:41:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.698 10:41:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:24.698 10:41:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1519761 00:08:24.698 10:41:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:25.268 10:41:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:25.268 10:41:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1519761 00:08:25.268 10:41:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O 
failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Write completed with error (sct=0, sc=8) 00:08:25.838 starting I/O failed: -6 00:08:25.838 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read 
completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 
Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 starting I/O failed: -6 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 
Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Read completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.839 Write completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error 
(sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 Write completed with error (sct=0, sc=8) 00:08:25.840 Read completed with error (sct=0, sc=8) 00:08:25.840 10:41:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:25.840 10:41:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1519761 00:08:25.840 10:41:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:25.840 Initializing NVMe Controllers 00:08:25.840 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:25.840 Controller IO queue size 128, less than required. 00:08:25.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:25.840 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:25.840 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:25.840 Initialization complete. Launching workers. 00:08:25.840 ======================================================== 00:08:25.840 Latency(us) 00:08:25.840 Device Information : IOPS MiB/s Average min max 00:08:25.840 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.56 0.04 1592808.30 1000278.08 2971493.72 00:08:25.840 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.56 0.04 1594915.50 1001798.73 2973184.23 00:08:25.840 ======================================================== 00:08:25.840 Total : 161.12 0.08 1593861.90 1000278.08 2973184.23 00:08:25.840 00:08:25.840 [2024-11-28 10:41:04.558836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:08:25.840 [2024-11-28 10:41:04.558901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
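The kill -0 1519761 / sleep 0.5 lines above are the wait loop in delete_subsystem.sh polling the spdk_nvme_perf process while the subsystem is torn down underneath it. A minimal sketch of that polling pattern (not the script verbatim; the 30-iteration budget and the perf_pid variable are taken from the trace, the error handling is assumed):
  # kill -0 only tests that the PID still exists, it sends no signal.
  # 30 iterations at 0.5 s gives the perf job roughly 15 s to exit on its
  # own once its controller goes into the failed state.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && { echo "perf did not exit in time" >&2; exit 1; }
      sleep 0.5
  done
  wait "$perf_pid" || true   # reap it; a non-zero exit status is expected here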
00:08:25.840 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1519761 00:08:26.406 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1519761) - No such process 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1519761 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1519761 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1519761 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.406 [2024-11-28 10:41:05.056163] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1520593 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:26.406 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.406 [2024-11-28 10:41:05.192988] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:26.972 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.972 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:26.972 10:41:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.232 10:41:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.232 10:41:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:27.232 10:41:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.799 10:41:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.799 10:41:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:27.799 10:41:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:28.368 10:41:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.368 10:41:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:28.368 10:41:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:28.937 10:41:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.937 10:41:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:28.937 10:41:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.504 10:41:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:29.504 10:41:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:29.504 10:41:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.763 10:41:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:29.763 10:41:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:29.763 10:41:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:30.331 10:41:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:30.331 10:41:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:30.331 10:41:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:30.899 10:41:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:30.899 10:41:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:30.899 10:41:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.468 10:41:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.468 10:41:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:31.468 10:41:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.036 10:41:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.036 10:41:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:32.036 10:41:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.297 10:41:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.297 10:41:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:32.297 10:41:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.864 10:41:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.864 10:41:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:32.864 10:41:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.432 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.432 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:33.432 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.692 Initializing NVMe Controllers 00:08:33.692 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:33.692 Controller IO queue size 128, less than required. 00:08:33.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:33.692 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:33.692 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:33.692 Initialization complete. Launching workers. 00:08:33.692 ======================================================== 00:08:33.692 Latency(us) 00:08:33.692 Device Information : IOPS MiB/s Average min max 00:08:33.692 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001432.82 1000065.10 1004660.32 00:08:33.692 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002969.72 1000121.40 1007011.62 00:08:33.692 ======================================================== 00:08:33.692 Total : 256.00 0.12 1002201.27 1000065.10 1007011.62 00:08:33.692 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1520593 00:08:33.952 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1520593) - No such process 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1520593 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:33.952 rmmod nvme_rdma 00:08:33.952 rmmod nvme_fabrics 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1519620 ']' 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1519620 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1519620 ']' 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1519620 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1519620 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1519620' 00:08:33.952 killing process with pid 1519620 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1519620 00:08:33.952 10:41:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1519620 00:08:35.334 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.334 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:35.334 00:08:35.334 real 0m23.668s 00:08:35.334 user 0m52.349s 00:08:35.334 sys 0m8.086s 00:08:35.334 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.334 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.334 ************************************ 00:08:35.334 END TEST nvmf_delete_subsystem 00:08:35.334 ************************************ 00:08:35.334 10:41:14 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:35.334 10:41:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:35.334 10:41:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.334 10:41:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.595 ************************************ 00:08:35.595 START TEST nvmf_host_management 00:08:35.595 ************************************ 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:35.595 * Looking for test storage... 
00:08:35.595 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:35.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.595 --rc genhtml_branch_coverage=1 00:08:35.595 --rc genhtml_function_coverage=1 00:08:35.595 --rc genhtml_legend=1 00:08:35.595 --rc geninfo_all_blocks=1 00:08:35.595 --rc geninfo_unexecuted_blocks=1 00:08:35.595 00:08:35.595 ' 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:35.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.595 --rc genhtml_branch_coverage=1 00:08:35.595 --rc genhtml_function_coverage=1 00:08:35.595 --rc genhtml_legend=1 00:08:35.595 --rc geninfo_all_blocks=1 00:08:35.595 --rc geninfo_unexecuted_blocks=1 00:08:35.595 00:08:35.595 ' 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:35.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.595 --rc genhtml_branch_coverage=1 00:08:35.595 --rc genhtml_function_coverage=1 00:08:35.595 --rc genhtml_legend=1 00:08:35.595 --rc geninfo_all_blocks=1 00:08:35.595 --rc geninfo_unexecuted_blocks=1 00:08:35.595 00:08:35.595 ' 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:35.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.595 --rc genhtml_branch_coverage=1 00:08:35.595 --rc genhtml_function_coverage=1 00:08:35.595 --rc genhtml_legend=1 00:08:35.595 --rc geninfo_all_blocks=1 00:08:35.595 --rc geninfo_unexecuted_blocks=1 00:08:35.595 00:08:35.595 ' 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.595 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.596 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:35.596 10:41:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:43.947 10:41:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.947 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:43.948 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:43.948 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:43.948 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:d9:00.1: mlx_0_1' 00:08:43.948 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:43.948 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:44.209 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:44.209 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:44.209 altname enp217s0f0np0 00:08:44.209 altname ens818f0np0 00:08:44.209 inet 192.168.100.8/24 scope global mlx_0_0 00:08:44.209 valid_lft forever preferred_lft forever 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:44.209 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:44.209 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:08:44.209 altname enp217s0f1np1 00:08:44.209 altname ens818f1np1 00:08:44.209 inet 192.168.100.9/24 scope global mlx_0_1 00:08:44.209 valid_lft forever preferred_lft forever 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:44.209 10:41:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:44.209 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:44.210 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:44.210 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:44.210 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:44.210 192.168.100.9' 00:08:44.210 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:44.210 192.168.100.9' 00:08:44.210 10:41:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:44.210 192.168.100.9' 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1526365 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1526365 00:08:44.210 
10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1526365 ']' 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.210 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:44.469 [2024-11-28 10:41:23.146671] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:44.469 [2024-11-28 10:41:23.146788] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.469 [2024-11-28 10:41:23.302388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.729 [2024-11-28 10:41:23.403307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.729 [2024-11-28 10:41:23.403359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.729 [2024-11-28 10:41:23.403372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.729 [2024-11-28 10:41:23.403385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.729 [2024-11-28 10:41:23.403395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
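nvmfappstart above amounts to launching nvmf_tgt in the background and waiting for its RPC socket before any rpc calls are made. A rough equivalent of what the trace shows, run from the spdk checkout (the socket-polling loop is a simplified stand-in for the waitforlisten helper):
  # Core mask 0x1E puts reactors on cores 1-4, -i 0 selects shared-memory id 0,
  # -e 0xFFFF enables every tracepoint group; then block until the RPC socket
  # appears or the target dies during startup.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.1
  done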
00:08:44.729 [2024-11-28 10:41:23.406089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.729 [2024-11-28 10:41:23.406159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.729 [2024-11-28 10:41:23.406242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.729 [2024-11-28 10:41:23.406267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:45.298 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.298 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:45.298 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.298 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.298 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.298 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.298 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:45.298 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.298 10:41:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.298 [2024-11-28 10:41:24.039949] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fe766b3e940) succeed. 00:08:45.298 [2024-11-28 10:41:24.050167] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fe7661bd940) succeed. 
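With the RDMA transport created and both mlx5 IB devices registered, the subsystem setup that follows (driven through the rpcs.txt batch in the next step) reduces to a short RPC sequence. A rough equivalent issued directly with rpc.py, with the bdev name, NQNs and listener address taken from the trace and only the malloc sizing assumed here:

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# RDMA transport with the shared-buffer count used by this test
$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Backing bdev plus the subsystem exported to the host under test
$rpc_py bdev_malloc_create -b Malloc0 64 512
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0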
00:08:45.557 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.557 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:45.557 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.557 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.557 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:45.557 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:45.557 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:45.557 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.557 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.557 Malloc0 00:08:45.557 [2024-11-28 10:41:24.426716] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1526674 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1526674 /var/tmp/bdevperf.sock 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1526674 ']' 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.817 { 00:08:45.817 "params": { 00:08:45.817 "name": "Nvme$subsystem", 00:08:45.817 "trtype": "$TEST_TRANSPORT", 00:08:45.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.817 "adrfam": "ipv4", 00:08:45.817 "trsvcid": "$NVMF_PORT", 00:08:45.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.817 "hdgst": ${hdgst:-false}, 00:08:45.817 "ddgst": ${ddgst:-false} 00:08:45.817 }, 00:08:45.817 "method": "bdev_nvme_attach_controller" 00:08:45.817 } 00:08:45.817 EOF 00:08:45.817 )") 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:45.817 10:41:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.817 "params": { 00:08:45.817 "name": "Nvme0", 00:08:45.817 "trtype": "rdma", 00:08:45.817 "traddr": "192.168.100.8", 00:08:45.817 "adrfam": "ipv4", 00:08:45.817 "trsvcid": "4420", 00:08:45.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:45.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:45.817 "hdgst": false, 00:08:45.817 "ddgst": false 00:08:45.817 }, 00:08:45.817 "method": "bdev_nvme_attach_controller" 00:08:45.817 }' 00:08:45.817 [2024-11-28 10:41:24.567029] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:45.817 [2024-11-28 10:41:24.567117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526674 ] 00:08:46.076 [2024-11-28 10:41:24.719010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.076 [2024-11-28 10:41:24.827893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.643 Running I/O for 10 seconds... 
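bdevperf is now running the verify workload against the attach-controller JSON generated above: queue depth 64 (-q), 64 KiB I/Os (-o 65536), for 10 seconds (-t). Reproducing the run by hand, with the config written to a file instead of the /dev/fd/63 process substitution (file name illustrative, wrapper structure the standard SPDK JSON config layout rather than the literal gen_nvmf_target_json output), would look roughly like:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0"
          }
        }
      ]
    }
  ]
}
EOF

# -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10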
00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=432 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 432 -ge 100 ']' 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
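bdevperf reported 432 completed reads, clearing the 100-read threshold, so the harness knows I/O is flowing before it starts disturbing the target. A condensed version of that polling loop, using the same RPCs visible in the trace (retry count matches the trace, the sleep interval is assumed):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Countdown mirrors the (( i = 10 )) / (( i != 0 )) pattern in the trace
for (( i = 10; i != 0; i-- )); do
    reads=$("$rpc_py" -s "$sock" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
        echo "I/O confirmed: $reads reads completed"
        break
    fi
    sleep 1
done

Next the test removes the allowed host NQN from cnode0 and immediately re-adds it; the burst of ABORTED - SQ DELETION completions and the resetting controller notice that follow are the initiator side's queue pairs being torn down and reconnected as a result.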
00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:46.643 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.644 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.644 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.644 10:41:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:47.841 512.00 IOPS, 32.00 MiB/s [2024-11-28T09:41:26.722Z] [2024-11-28 10:41:26.469984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cff480 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cef3c0 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cdf300 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ccf240 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cbf180 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000caf0c0 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c9f000 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 
10:41:26.470245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c8ef40 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c7ee80 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c6edc0 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c5ed00 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c4ec40 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c3eb80 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c2eac0 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c1ea00 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c0e940 len:0x10000 key:0x181800 00:08:47.841 [2024-11-28 10:41:26.470477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 
nsid:1 lba:73344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000feffc0 len:0x10000 key:0x181a00 00:08:47.841 [2024-11-28 10:41:26.470503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000fdff00 len:0x10000 key:0x181a00 00:08:47.841 [2024-11-28 10:41:26.470529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000beffc0 len:0x10000 key:0x181700 00:08:47.841 [2024-11-28 10:41:26.470554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.841 [2024-11-28 10:41:26.470568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b214000 len:0x10000 key:0x182900 00:08:47.841 [2024-11-28 10:41:26.470579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b1f3000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b1d2000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b92b000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b90a000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8e9000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8c8000 len:0x10000 key:0x182900 
00:08:47.842 [2024-11-28 10:41:26.470752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8a7000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b886000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b865000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b844000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b823000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b802000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b7e1000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b7c0000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b79f000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.470983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.470998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb9e000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb7d000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb5c000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb3b000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb1a000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf9000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad8000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab7000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba96000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471244] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba75000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba54000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba33000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba12000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9f1000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9d0000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9af000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdae000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd8d000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69888 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20000bd6c000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd4b000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd2a000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.842 [2024-11-28 10:41:26.471565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd09000 len:0x10000 key:0x182900 00:08:47.842 [2024-11-28 10:41:26.471576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.843 [2024-11-28 10:41:26.471590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bce8000 len:0x10000 key:0x182900 00:08:47.843 [2024-11-28 10:41:26.471603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.843 [2024-11-28 10:41:26.471622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bcc7000 len:0x10000 key:0x182900 00:08:47.843 [2024-11-28 10:41:26.471634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.843 [2024-11-28 10:41:26.471648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bca6000 len:0x10000 key:0x182900 00:08:47.843 [2024-11-28 10:41:26.471660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.843 [2024-11-28 10:41:26.471674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc85000 len:0x10000 key:0x182900 00:08:47.843 [2024-11-28 10:41:26.471686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.843 [2024-11-28 10:41:26.471700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc64000 len:0x10000 key:0x182900 00:08:47.843 [2024-11-28 10:41:26.471712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.843 [2024-11-28 10:41:26.471726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc43000 len:0x10000 key:0x182900 00:08:47.843 [2024-11-28 10:41:26.471738] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.843 [2024-11-28 10:41:26.471751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc22000 len:0x10000 key:0x182900 00:08:47.843 [2024-11-28 10:41:26.471763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.843 [2024-11-28 10:41:26.474904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:47.843 task offset: 71296 on job bdev=Nvme0n1 fails 00:08:47.843 00:08:47.843 Latency(us) 00:08:47.843 [2024-11-28T09:41:26.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.843 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:47.843 Job: Nvme0n1 ended in about 1.21 seconds with error 00:08:47.843 Verification LBA range: start 0x0 length 0x400 00:08:47.843 Nvme0n1 : 1.21 421.65 26.35 52.71 0.00 133977.57 2490.37 1013343.85 00:08:47.843 [2024-11-28T09:41:26.724Z] =================================================================================================================== 00:08:47.843 [2024-11-28T09:41:26.724Z] Total : 421.65 26.35 52.71 0.00 133977.57 2490.37 1013343.85 00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1526674 00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:47.843 { 00:08:47.843 "params": { 00:08:47.843 "name": "Nvme$subsystem", 00:08:47.843 "trtype": "$TEST_TRANSPORT", 00:08:47.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.843 "adrfam": "ipv4", 00:08:47.843 "trsvcid": "$NVMF_PORT", 00:08:47.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.843 "hdgst": ${hdgst:-false}, 00:08:47.843 "ddgst": ${ddgst:-false} 00:08:47.843 }, 00:08:47.843 "method": "bdev_nvme_attach_controller" 00:08:47.843 } 00:08:47.843 EOF 00:08:47.843 )") 00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:47.843 10:41:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:47.843 "params": { 00:08:47.843 "name": "Nvme0", 00:08:47.843 "trtype": "rdma", 00:08:47.843 "traddr": "192.168.100.8", 00:08:47.843 "adrfam": "ipv4", 00:08:47.843 "trsvcid": "4420", 00:08:47.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:47.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:47.843 "hdgst": false, 00:08:47.843 "ddgst": false 00:08:47.843 }, 00:08:47.843 "method": "bdev_nvme_attach_controller" 00:08:47.843 }' 00:08:47.843 [2024-11-28 10:41:26.573747] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:47.843 [2024-11-28 10:41:26.573851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526959 ] 00:08:48.102 [2024-11-28 10:41:26.729509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.102 [2024-11-28 10:41:26.833117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.361 Running I/O for 1 seconds... 00:08:49.738 2688.00 IOPS, 168.00 MiB/s 00:08:49.738 Latency(us) 00:08:49.738 [2024-11-28T09:41:28.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.738 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:49.738 Verification LBA range: start 0x0 length 0x400 00:08:49.738 Nvme0n1 : 1.02 2739.88 171.24 0.00 0.00 22869.74 1146.88 46976.20 00:08:49.738 [2024-11-28T09:41:28.619Z] =================================================================================================================== 00:08:49.738 [2024-11-28T09:41:28.619Z] Total : 2739.88 171.24 0.00 0.00 22869.74 1146.88 46976.20 00:08:50.307 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1526674 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:08:50.307 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:50.307 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 
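Teardown then mirrors setup: nvmftestfini syncs, unloads the kernel initiator modules with failures tolerated (set +e), and kills the nvmf_tgt started earlier. Done by hand, assuming root privileges and that nvmfpid still holds the PID recorded at startup (1526365 in this run), the same cleanup is roughly:

# Flush I/O and drop the kernel NVMe-oF initiator modules loaded for the test
sync
modprobe -v -r nvme-rdma || true
modprobe -v -r nvme-fabrics || true

# Stop the SPDK target started by nvmfappstart
kill "$nvmfpid" || true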
00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:50.566 rmmod nvme_rdma 00:08:50.566 rmmod nvme_fabrics 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1526365 ']' 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1526365 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1526365 ']' 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1526365 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1526365 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1526365' 00:08:50.566 killing process with pid 1526365 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1526365 00:08:50.566 10:41:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1526365 00:08:52.474 [2024-11-28 10:41:31.076887] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:52.474 00:08:52.474 real 0m16.941s 00:08:52.474 user 0m36.024s 00:08:52.474 sys 0m8.110s 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.474 ************************************ 00:08:52.474 END TEST nvmf_host_management 00:08:52.474 ************************************ 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.474 ************************************ 00:08:52.474 START TEST nvmf_lvol 00:08:52.474 ************************************ 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:52.474 * Looking for test storage... 00:08:52.474 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.474 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.735 --rc genhtml_branch_coverage=1 00:08:52.735 --rc genhtml_function_coverage=1 00:08:52.735 --rc genhtml_legend=1 00:08:52.735 --rc geninfo_all_blocks=1 00:08:52.735 --rc geninfo_unexecuted_blocks=1 00:08:52.735 00:08:52.735 ' 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.735 --rc genhtml_branch_coverage=1 00:08:52.735 --rc genhtml_function_coverage=1 00:08:52.735 --rc genhtml_legend=1 00:08:52.735 --rc geninfo_all_blocks=1 00:08:52.735 --rc geninfo_unexecuted_blocks=1 00:08:52.735 00:08:52.735 ' 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.735 --rc genhtml_branch_coverage=1 00:08:52.735 --rc genhtml_function_coverage=1 00:08:52.735 --rc genhtml_legend=1 00:08:52.735 --rc geninfo_all_blocks=1 00:08:52.735 --rc geninfo_unexecuted_blocks=1 00:08:52.735 00:08:52.735 ' 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.735 --rc genhtml_branch_coverage=1 00:08:52.735 --rc genhtml_function_coverage=1 00:08:52.735 --rc genhtml_legend=1 00:08:52.735 --rc geninfo_all_blocks=1 00:08:52.735 --rc geninfo_unexecuted_blocks=1 00:08:52.735 00:08:52.735 ' 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.735 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.736 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:52.736 10:41:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.736 10:41:39 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:02.736 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:02.736 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:02.736 10:41:39 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:02.736 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:02.736 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:02.736 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:02.737 10:41:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:02.737 
10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:02.737 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:02.737 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:02.737 altname enp217s0f0np0 00:09:02.737 altname ens818f0np0 00:09:02.737 inet 192.168.100.8/24 scope global mlx_0_0 00:09:02.737 valid_lft forever preferred_lft forever 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:02.737 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:02.737 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:02.737 altname enp217s0f1np1 00:09:02.737 altname ens818f1np1 00:09:02.737 inet 192.168.100.9/24 scope global mlx_0_1 00:09:02.737 valid_lft forever preferred_lft forever 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@109 -- # continue 2 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:02.737 192.168.100.9' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:02.737 192.168.100.9' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:02.737 192.168.100.9' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:02.737 
10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1531938 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1531938 00:09:02.737 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:02.738 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1531938 ']' 00:09:02.738 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.738 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.738 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.738 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.738 10:41:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:02.738 [2024-11-28 10:41:40.317890] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:02.738 [2024-11-28 10:41:40.317987] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.738 [2024-11-28 10:41:40.474803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:02.738 [2024-11-28 10:41:40.576836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.738 [2024-11-28 10:41:40.576888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.738 [2024-11-28 10:41:40.576901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.738 [2024-11-28 10:41:40.576914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.738 [2024-11-28 10:41:40.576924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
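The lvol test that follows is driven entirely through rpc.py against the nvmf_tgt just started with core mask 0x7 (hence the three reactors on cores 0, 1 and 2 below). Condensed from the xtrace, the setup and grow/clone phase amounts to the sequence sketched here; this is a summary, not the script's verbatim contents, $rpc stands for the scripts/rpc.py path used throughout, spdk_nvme_perf lives under build/bin in this workspace, and the <...-uuid> placeholders are the lvstore/lvol/snapshot/clone UUIDs reported in this particular run:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # transport plus backing devices: two 64 MiB / 512 B malloc bdevs striped into raid0
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512                       # -> Malloc0
    $rpc bdev_malloc_create 64 512                       # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    # lvstore on the raid, then a 20 MiB lvol inside it
    $rpc bdev_lvol_create_lvstore raid0 lvs              # -> <lvs-uuid>
    $rpc bdev_lvol_create -u <lvs-uuid> lvol 20          # -> <lvol-uuid>
    # export the lvol over NVMe-oF/RDMA on 192.168.100.8:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    # randwrite I/O from the initiator side while the volume is grown underneath
    spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    $rpc bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT      # -> <snapshot-uuid>
    $rpc bdev_lvol_resize <lvol-uuid> 30
    $rpc bdev_lvol_clone <snapshot-uuid> MY_CLONE        # -> <clone-uuid>
    $rpc bdev_lvol_inflate <clone-uuid>
    wait                                                 # let the 10 s perf run finish

Teardown then deletes the subsystem, the lvol and the lvstore before nvmftestfini unloads nvme-rdma.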
00:09:02.738 [2024-11-28 10:41:40.579280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.738 [2024-11-28 10:41:40.579360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.738 [2024-11-28 10:41:40.579364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.738 10:41:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.738 10:41:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:02.738 10:41:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.738 10:41:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.738 10:41:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:02.738 10:41:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.738 10:41:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:02.738 [2024-11-28 10:41:41.353559] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f659f948940) succeed. 00:09:02.738 [2024-11-28 10:41:41.362803] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f659f904940) succeed. 00:09:02.738 10:41:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.997 10:41:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:02.997 10:41:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.256 10:41:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:03.256 10:41:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:03.515 10:41:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:03.774 10:41:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=32b28023-d45e-4aa0-9e2f-2dabbd79fba8 00:09:03.774 10:41:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 32b28023-d45e-4aa0-9e2f-2dabbd79fba8 lvol 20 00:09:04.034 10:41:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=57903713-74b1-4d49-a476-f3ec2bed94a2 00:09:04.034 10:41:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:04.034 10:41:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 57903713-74b1-4d49-a476-f3ec2bed94a2 00:09:04.293 10:41:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:04.553 [2024-11-28 10:41:43.262010] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:04.553 10:41:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:04.811 10:41:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1532511 00:09:04.811 10:41:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:04.811 10:41:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:05.748 10:41:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 57903713-74b1-4d49-a476-f3ec2bed94a2 MY_SNAPSHOT 00:09:06.008 10:41:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1a805f68-aaba-4e66-9027-c4492adb1610 00:09:06.008 10:41:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 57903713-74b1-4d49-a476-f3ec2bed94a2 30 00:09:06.268 10:41:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1a805f68-aaba-4e66-9027-c4492adb1610 MY_CLONE 00:09:06.268 10:41:45 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4ed88c21-25aa-456d-a321-0a81702300f4 00:09:06.268 10:41:45 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4ed88c21-25aa-456d-a321-0a81702300f4 00:09:06.835 10:41:45 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1532511 00:09:16.819 Initializing NVMe Controllers 00:09:16.820 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:16.820 Controller IO queue size 128, less than required. 00:09:16.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:16.820 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:16.820 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:16.820 Initialization complete. Launching workers. 
00:09:16.820 ======================================================== 00:09:16.820 Latency(us) 00:09:16.820 Device Information : IOPS MiB/s Average min max 00:09:16.820 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15058.50 58.82 8500.62 3405.05 159173.07 00:09:16.820 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14957.40 58.43 8557.61 147.10 145156.49 00:09:16.820 ======================================================== 00:09:16.820 Total : 30015.90 117.25 8529.02 147.10 159173.07 00:09:16.820 00:09:16.820 10:41:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 57903713-74b1-4d49-a476-f3ec2bed94a2 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32b28023-d45e-4aa0-9e2f-2dabbd79fba8 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:16.820 rmmod nvme_rdma 00:09:16.820 rmmod nvme_fabrics 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1531938 ']' 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1531938 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1531938 ']' 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1531938 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1531938 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1531938' 00:09:16.820 killing process with pid 1531938 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1531938 00:09:16.820 10:41:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1531938 00:09:18.726 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:18.726 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:18.726 00:09:18.726 real 0m26.303s 00:09:18.726 user 1m17.221s 00:09:18.726 sys 0m8.159s 00:09:18.726 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.726 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:18.726 ************************************ 00:09:18.726 END TEST nvmf_lvol 00:09:18.726 ************************************ 00:09:18.726 10:41:57 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:18.726 10:41:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:18.726 10:41:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.726 10:41:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:18.985 ************************************ 00:09:18.986 START TEST nvmf_lvs_grow 00:09:18.986 ************************************ 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:18.986 * Looking for test storage... 
00:09:18.986 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.986 --rc genhtml_branch_coverage=1 00:09:18.986 --rc genhtml_function_coverage=1 00:09:18.986 --rc genhtml_legend=1 00:09:18.986 --rc geninfo_all_blocks=1 00:09:18.986 --rc geninfo_unexecuted_blocks=1 00:09:18.986 00:09:18.986 ' 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.986 --rc genhtml_branch_coverage=1 00:09:18.986 --rc genhtml_function_coverage=1 00:09:18.986 --rc genhtml_legend=1 00:09:18.986 --rc geninfo_all_blocks=1 00:09:18.986 --rc geninfo_unexecuted_blocks=1 00:09:18.986 00:09:18.986 ' 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:18.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.986 --rc genhtml_branch_coverage=1 00:09:18.986 --rc genhtml_function_coverage=1 00:09:18.986 --rc genhtml_legend=1 00:09:18.986 --rc geninfo_all_blocks=1 00:09:18.986 --rc geninfo_unexecuted_blocks=1 00:09:18.986 00:09:18.986 ' 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.986 --rc genhtml_branch_coverage=1 00:09:18.986 --rc genhtml_function_coverage=1 00:09:18.986 --rc genhtml_legend=1 00:09:18.986 --rc geninfo_all_blocks=1 00:09:18.986 --rc geninfo_unexecuted_blocks=1 00:09:18.986 00:09:18.986 ' 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
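The nvmf_lvs_grow run starting here repeats the same bring-up as the lvol test: nvmf/common.sh re-scans the Mellanox devices at 0000:d9:00.0/.1 (0x15b3:0x1015), reloads the IB/RDMA modules, and reads each mlx_0_* interface address back with the same ip/awk/cut pipeline seen earlier. Reduced to its core, the traced get_ip_address helper behaves roughly like this sketch (not its verbatim source):

    get_ip_address() {
        local interface=$1
        # IPv4 address on the interface, with the /24 prefix length stripped
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # 192.168.100.8 on this host
    get_ip_address mlx_0_1    # 192.168.100.9 on this host

Those two addresses become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP for the rest of the test.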
00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:18.986 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:18.986 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:18.987 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:18.987 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.245 10:41:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.363 10:42:06 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:27.363 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:27.363 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:27.364 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:27.364 10:42:06 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:27.364 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:27.364 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:27.364 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:27.624 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:27.624 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:27.624 altname enp217s0f0np0 00:09:27.624 altname ens818f0np0 00:09:27.624 inet 192.168.100.8/24 scope global mlx_0_0 00:09:27.624 valid_lft forever preferred_lft forever 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:27.624 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:27.624 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:27.624 altname enp217s0f1np1 00:09:27.624 altname ens818f1np1 00:09:27.624 inet 192.168.100.9/24 scope global mlx_0_1 00:09:27.624 valid_lft forever preferred_lft forever 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:27.624 10:42:06 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:27.624 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:27.625 192.168.100.9' 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:27.625 192.168.100.9' 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:27.625 192.168.100.9' 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1539640 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1539640 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1539640 ']' 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.625 10:42:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.884 [2024-11-28 10:42:06.533861] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:27.884 [2024-11-28 10:42:06.533976] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.884 [2024-11-28 10:42:06.688396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.144 [2024-11-28 10:42:06.783357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.144 [2024-11-28 10:42:06.783406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.144 [2024-11-28 10:42:06.783418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.144 [2024-11-28 10:42:06.783431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.144 [2024-11-28 10:42:06.783440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
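The helper calls traced above reduce to a small amount of shell: each RDMA netdev's IPv4 address is read with ip/awk/cut, and the collected address list is split into the first and second target IPs with head/tail. A minimal sketch, simplified from the nvmf/common.sh helpers shown in the trace rather than the verbatim functions (interface names and addresses are the ones from this run):

get_ip_address() {
    local interface=$1
    # e.g. "6: mlx_0_0    inet 192.168.100.8/24 scope global ..." -> "192.168.100.8/24" -> "192.168.100.8"
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9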
00:09:28.144 [2024-11-28 10:42:06.784783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.713 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.713 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:28.713 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:28.713 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.713 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.713 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.713 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:28.713 [2024-11-28 10:42:07.566896] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f8501531940) succeed. 00:09:28.713 [2024-11-28 10:42:07.575577] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f85013bd940) succeed. 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.972 ************************************ 00:09:28.972 START TEST lvs_grow_clean 00:09:28.972 ************************************ 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.972 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.232 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:29.232 10:42:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:29.492 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:29.492 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:29.492 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:29.492 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:29.492 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:29.492 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b lvol 150 00:09:29.751 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=05d7d678-7624-49df-99e5-a34d10e41a8c 00:09:29.751 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:29.751 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:30.010 [2024-11-28 10:42:08.671750] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:30.010 [2024-11-28 10:42:08.671830] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:30.010 true 00:09:30.010 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:30.010 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:30.010 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:30.010 10:42:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:30.270 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 05d7d678-7624-49df-99e5-a34d10e41a8c 00:09:30.529 10:42:09 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:30.529 [2024-11-28 10:42:09.410256] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:30.788 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:30.788 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1540218 00:09:30.788 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:30.788 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:30.788 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1540218 /var/tmp/bdevperf.sock 00:09:30.788 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1540218 ']' 00:09:30.788 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.788 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.788 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:30.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:30.788 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.788 10:42:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:31.048 [2024-11-28 10:42:09.684683] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
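Everything the clean variant needs is created over rpc.py before bdevperf is launched: an AIO bdev backed by a 200M file, an lvstore on top of it (49 data clusters at the 4 MiB cluster size), a 150M lvol, and an NVMe-oF subsystem exporting that lvol over RDMA on 192.168.100.8:4420. Only the backing file is grown to 400M and rescanned at this stage; the lvstore itself stays at 49 clusters until bdev_lvol_grow_lvstore is issued mid-run. A condensed sketch of the sequence traced above (rpc.py and file paths abbreviated; the UUIDs are the ones from this run):

rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
truncate -s 200M test/nvmf/target/aio_bdev
rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
rpc.py bdev_lvol_create -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b lvol 150
truncate -s 400M test/nvmf/target/aio_bdev        # grow the backing file ...
rpc.py bdev_aio_rescan aio_bdev                   # ... and let the AIO bdev pick up the new size
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 05d7d678-7624-49df-99e5-a34d10e41a8c
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420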
00:09:31.048 [2024-11-28 10:42:09.684775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540218 ] 00:09:31.048 [2024-11-28 10:42:09.837830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.308 [2024-11-28 10:42:09.938378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.876 10:42:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.876 10:42:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:31.877 10:42:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:31.877 Nvme0n1 00:09:32.137 10:42:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:32.137 [ 00:09:32.137 { 00:09:32.137 "name": "Nvme0n1", 00:09:32.137 "aliases": [ 00:09:32.137 "05d7d678-7624-49df-99e5-a34d10e41a8c" 00:09:32.137 ], 00:09:32.137 "product_name": "NVMe disk", 00:09:32.137 "block_size": 4096, 00:09:32.137 "num_blocks": 38912, 00:09:32.137 "uuid": "05d7d678-7624-49df-99e5-a34d10e41a8c", 00:09:32.137 "numa_id": 1, 00:09:32.137 "assigned_rate_limits": { 00:09:32.137 "rw_ios_per_sec": 0, 00:09:32.137 "rw_mbytes_per_sec": 0, 00:09:32.137 "r_mbytes_per_sec": 0, 00:09:32.137 "w_mbytes_per_sec": 0 00:09:32.137 }, 00:09:32.137 "claimed": false, 00:09:32.137 "zoned": false, 00:09:32.137 "supported_io_types": { 00:09:32.137 "read": true, 00:09:32.137 "write": true, 00:09:32.137 "unmap": true, 00:09:32.137 "flush": true, 00:09:32.137 "reset": true, 00:09:32.137 "nvme_admin": true, 00:09:32.137 "nvme_io": true, 00:09:32.137 "nvme_io_md": false, 00:09:32.137 "write_zeroes": true, 00:09:32.137 "zcopy": false, 00:09:32.137 "get_zone_info": false, 00:09:32.137 "zone_management": false, 00:09:32.137 "zone_append": false, 00:09:32.137 "compare": true, 00:09:32.137 "compare_and_write": true, 00:09:32.137 "abort": true, 00:09:32.137 "seek_hole": false, 00:09:32.137 "seek_data": false, 00:09:32.137 "copy": true, 00:09:32.137 "nvme_iov_md": false 00:09:32.137 }, 00:09:32.137 "memory_domains": [ 00:09:32.137 { 00:09:32.137 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:32.137 "dma_device_type": 0 00:09:32.137 } 00:09:32.137 ], 00:09:32.137 "driver_specific": { 00:09:32.137 "nvme": [ 00:09:32.137 { 00:09:32.137 "trid": { 00:09:32.137 "trtype": "RDMA", 00:09:32.137 "adrfam": "IPv4", 00:09:32.137 "traddr": "192.168.100.8", 00:09:32.137 "trsvcid": "4420", 00:09:32.137 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:32.137 }, 00:09:32.137 "ctrlr_data": { 00:09:32.137 "cntlid": 1, 00:09:32.137 "vendor_id": "0x8086", 00:09:32.137 "model_number": "SPDK bdev Controller", 00:09:32.137 "serial_number": "SPDK0", 00:09:32.137 "firmware_revision": "25.01", 00:09:32.137 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:32.137 "oacs": { 00:09:32.137 "security": 0, 00:09:32.137 "format": 0, 00:09:32.137 "firmware": 0, 00:09:32.137 "ns_manage": 0 00:09:32.137 }, 00:09:32.137 "multi_ctrlr": true, 
00:09:32.137 "ana_reporting": false 00:09:32.137 }, 00:09:32.137 "vs": { 00:09:32.137 "nvme_version": "1.3" 00:09:32.137 }, 00:09:32.137 "ns_data": { 00:09:32.137 "id": 1, 00:09:32.137 "can_share": true 00:09:32.137 } 00:09:32.137 } 00:09:32.137 ], 00:09:32.137 "mp_policy": "active_passive" 00:09:32.137 } 00:09:32.137 } 00:09:32.137 ] 00:09:32.137 10:42:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1540484 00:09:32.137 10:42:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:32.137 10:42:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:32.396 Running I/O for 10 seconds... 00:09:33.334 Latency(us) 00:09:33.334 [2024-11-28T09:42:12.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.334 Nvme0n1 : 1.00 30120.00 117.66 0.00 0.00 0.00 0.00 0.00 00:09:33.334 [2024-11-28T09:42:12.215Z] =================================================================================================================== 00:09:33.334 [2024-11-28T09:42:12.215Z] Total : 30120.00 117.66 0.00 0.00 0.00 0.00 0.00 00:09:33.334 00:09:34.272 10:42:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:34.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.272 Nvme0n1 : 2.00 30496.50 119.13 0.00 0.00 0.00 0.00 0.00 00:09:34.272 [2024-11-28T09:42:13.153Z] =================================================================================================================== 00:09:34.272 [2024-11-28T09:42:13.153Z] Total : 30496.50 119.13 0.00 0.00 0.00 0.00 0.00 00:09:34.272 00:09:34.272 true 00:09:34.272 10:42:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:34.272 10:42:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:34.531 10:42:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:34.531 10:42:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:34.531 10:42:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1540484 00:09:35.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.469 Nvme0n1 : 3.00 30560.33 119.38 0.00 0.00 0.00 0.00 0.00 00:09:35.469 [2024-11-28T09:42:14.350Z] =================================================================================================================== 00:09:35.469 [2024-11-28T09:42:14.350Z] Total : 30560.33 119.38 0.00 0.00 0.00 0.00 0.00 00:09:35.469 00:09:36.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.417 Nvme0n1 : 4.00 30680.00 119.84 0.00 0.00 0.00 0.00 0.00 00:09:36.417 [2024-11-28T09:42:15.298Z] 
=================================================================================================================== 00:09:36.417 [2024-11-28T09:42:15.298Z] Total : 30680.00 119.84 0.00 0.00 0.00 0.00 0.00 00:09:36.417 00:09:37.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.355 Nvme0n1 : 5.00 30759.60 120.15 0.00 0.00 0.00 0.00 0.00 00:09:37.355 [2024-11-28T09:42:16.236Z] =================================================================================================================== 00:09:37.355 [2024-11-28T09:42:16.236Z] Total : 30759.60 120.15 0.00 0.00 0.00 0.00 0.00 00:09:37.355 00:09:38.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.292 Nvme0n1 : 6.00 30827.67 120.42 0.00 0.00 0.00 0.00 0.00 00:09:38.292 [2024-11-28T09:42:17.173Z] =================================================================================================================== 00:09:38.292 [2024-11-28T09:42:17.173Z] Total : 30827.67 120.42 0.00 0.00 0.00 0.00 0.00 00:09:38.292 00:09:39.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.227 Nvme0n1 : 7.00 30876.29 120.61 0.00 0.00 0.00 0.00 0.00 00:09:39.227 [2024-11-28T09:42:18.108Z] =================================================================================================================== 00:09:39.227 [2024-11-28T09:42:18.108Z] Total : 30876.29 120.61 0.00 0.00 0.00 0.00 0.00 00:09:39.227 00:09:40.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.624 Nvme0n1 : 8.00 30921.25 120.79 0.00 0.00 0.00 0.00 0.00 00:09:40.624 [2024-11-28T09:42:19.505Z] =================================================================================================================== 00:09:40.624 [2024-11-28T09:42:19.505Z] Total : 30921.25 120.79 0.00 0.00 0.00 0.00 0.00 00:09:40.624 00:09:41.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.239 Nvme0n1 : 9.00 30927.33 120.81 0.00 0.00 0.00 0.00 0.00 00:09:41.239 [2024-11-28T09:42:20.120Z] =================================================================================================================== 00:09:41.239 [2024-11-28T09:42:20.120Z] Total : 30927.33 120.81 0.00 0.00 0.00 0.00 0.00 00:09:41.239 00:09:42.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.176 Nvme0n1 : 10.00 30887.30 120.65 0.00 0.00 0.00 0.00 0.00 00:09:42.176 [2024-11-28T09:42:21.057Z] =================================================================================================================== 00:09:42.176 [2024-11-28T09:42:21.057Z] Total : 30887.30 120.65 0.00 0.00 0.00 0.00 0.00 00:09:42.176 00:09:42.435 00:09:42.435 Latency(us) 00:09:42.435 [2024-11-28T09:42:21.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.435 Nvme0n1 : 10.00 30887.47 120.65 0.00 0.00 4140.67 2818.05 18769.51 00:09:42.435 [2024-11-28T09:42:21.316Z] =================================================================================================================== 00:09:42.435 [2024-11-28T09:42:21.316Z] Total : 30887.47 120.65 0.00 0.00 4140.67 2818.05 18769.51 00:09:42.435 { 00:09:42.435 "results": [ 00:09:42.435 { 00:09:42.435 "job": "Nvme0n1", 00:09:42.435 "core_mask": "0x2", 00:09:42.435 "workload": "randwrite", 00:09:42.435 "status": "finished", 00:09:42.435 "queue_depth": 128, 00:09:42.435 "io_size": 4096, 
00:09:42.435 "runtime": 10.003637, 00:09:42.435 "iops": 30887.46622853268, 00:09:42.435 "mibps": 120.65416495520579, 00:09:42.435 "io_failed": 0, 00:09:42.435 "io_timeout": 0, 00:09:42.435 "avg_latency_us": 4140.665914483134, 00:09:42.435 "min_latency_us": 2818.048, 00:09:42.435 "max_latency_us": 18769.5104 00:09:42.435 } 00:09:42.435 ], 00:09:42.435 "core_count": 1 00:09:42.435 } 00:09:42.435 10:42:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1540218 00:09:42.435 10:42:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1540218 ']' 00:09:42.435 10:42:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1540218 00:09:42.435 10:42:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:42.435 10:42:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.435 10:42:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540218 00:09:42.435 10:42:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:42.435 10:42:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:42.435 10:42:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1540218' 00:09:42.435 killing process with pid 1540218 00:09:42.435 10:42:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1540218 00:09:42.435 Received shutdown signal, test time was about 10.000000 seconds 00:09:42.435 00:09:42.435 Latency(us) 00:09:42.435 [2024-11-28T09:42:21.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.435 [2024-11-28T09:42:21.316Z] =================================================================================================================== 00:09:42.435 [2024-11-28T09:42:21.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:42.435 10:42:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1540218 00:09:43.372 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:43.373 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:43.631 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:43.631 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:43.890 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:43.890 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:43.890 10:42:22 
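At this point the 10-second run is finished and the clean variant starts tearing down: the discovery listener and the subsystem are removed, and the lvstore reports 61 free clusters, which matches 99 total clusters after the grow minus the 38 clusters the 150M lvol occupies at the 4 MiB cluster size. The [[ '' == dirty ]] test is false for the clean variant, so the dirty-only steps are skipped. As a rough cross-check of the bdevperf summary printed above (figures from this run; the arithmetic is illustrative, not part of the test):

echo '30887.47 * 4096 / 1048576' | bc -l      # ~120.65, the MiB/s column for 4 KiB I/Os
echo '128 / 30887.47 * 1000000' | bc -l       # ~4144 us, in line with the 4140.67 us average latency at queue depth 128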
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.890 [2024-11-28 10:42:22.761965] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:44.149 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:44.149 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:44.149 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:44.149 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:44.149 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.149 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:44.149 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.149 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:44.149 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.149 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:44.150 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:44.150 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:44.150 request: 00:09:44.150 { 00:09:44.150 "uuid": "1ce52607-fef1-44a3-8f6f-2c094c89bd8b", 00:09:44.150 "method": "bdev_lvol_get_lvstores", 00:09:44.150 "req_id": 1 00:09:44.150 } 00:09:44.150 Got JSON-RPC error response 00:09:44.150 response: 00:09:44.150 { 00:09:44.150 "code": -19, 00:09:44.150 "message": "No such device" 00:09:44.150 } 00:09:44.150 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:44.150 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:44.150 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:44.150 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:44.150 10:42:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.408 aio_bdev 00:09:44.408 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 05d7d678-7624-49df-99e5-a34d10e41a8c 00:09:44.408 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=05d7d678-7624-49df-99e5-a34d10e41a8c 00:09:44.408 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.408 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:44.408 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.408 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.408 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:44.666 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 05d7d678-7624-49df-99e5-a34d10e41a8c -t 2000 00:09:44.667 [ 00:09:44.667 { 00:09:44.667 "name": "05d7d678-7624-49df-99e5-a34d10e41a8c", 00:09:44.667 "aliases": [ 00:09:44.667 "lvs/lvol" 00:09:44.667 ], 00:09:44.667 "product_name": "Logical Volume", 00:09:44.667 "block_size": 4096, 00:09:44.667 "num_blocks": 38912, 00:09:44.667 "uuid": "05d7d678-7624-49df-99e5-a34d10e41a8c", 00:09:44.667 "assigned_rate_limits": { 00:09:44.667 "rw_ios_per_sec": 0, 00:09:44.667 "rw_mbytes_per_sec": 0, 00:09:44.667 "r_mbytes_per_sec": 0, 00:09:44.667 "w_mbytes_per_sec": 0 00:09:44.667 }, 00:09:44.667 "claimed": false, 00:09:44.667 "zoned": false, 00:09:44.667 "supported_io_types": { 00:09:44.667 "read": true, 00:09:44.667 "write": true, 00:09:44.667 "unmap": true, 00:09:44.667 "flush": false, 00:09:44.667 "reset": true, 00:09:44.667 "nvme_admin": false, 00:09:44.667 "nvme_io": false, 00:09:44.667 "nvme_io_md": false, 00:09:44.667 "write_zeroes": true, 00:09:44.667 "zcopy": false, 00:09:44.667 "get_zone_info": false, 00:09:44.667 "zone_management": false, 00:09:44.667 "zone_append": false, 00:09:44.667 "compare": false, 00:09:44.667 "compare_and_write": false, 00:09:44.667 "abort": false, 00:09:44.667 "seek_hole": true, 00:09:44.667 "seek_data": true, 00:09:44.667 "copy": false, 00:09:44.667 "nvme_iov_md": false 00:09:44.667 }, 00:09:44.667 "driver_specific": { 00:09:44.667 "lvol": { 00:09:44.667 "lvol_store_uuid": "1ce52607-fef1-44a3-8f6f-2c094c89bd8b", 00:09:44.667 "base_bdev": "aio_bdev", 00:09:44.667 "thin_provision": false, 00:09:44.667 "num_allocated_clusters": 38, 00:09:44.667 "snapshot": false, 00:09:44.667 "clone": false, 00:09:44.667 "esnap_clone": false 00:09:44.667 } 00:09:44.667 } 00:09:44.667 } 00:09:44.667 ] 00:09:44.926 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:44.926 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:44.926 10:42:23 
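The clean-variant teardown above doubles as a hot-remove check: deleting the AIO bdev out from under the lvstore makes the lvstore lookup fail with JSON-RPC error -19 (No such device), and recreating the AIO bdev lets examine bring the lvol back (still reporting its 38 allocated clusters) before the free and total cluster counts are re-verified below. A condensed sketch of that check (rpc.py and file paths abbreviated; the UUIDs are the ones from this run):

rpc.py bdev_aio_delete aio_bdev                    # lvstore 'lvs' loses its base bdev
if rpc.py bdev_lvol_get_lvstores -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b; then
    echo 'lvstore lookup should have failed after hot remove' >&2
    exit 1
fi                                                 # expected error: code -19, "No such device"
rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
rpc.py bdev_wait_for_examine                       # lvol 05d7d678-7624-49df-99e5-a34d10e41a8c reappears as lvs/lvol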
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:44.926 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:44.926 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:44.926 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:45.185 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:45.185 10:42:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 05d7d678-7624-49df-99e5-a34d10e41a8c 00:09:45.445 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ce52607-fef1-44a3-8f6f-2c094c89bd8b 00:09:45.445 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.704 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.704 00:09:45.704 real 0m16.831s 00:09:45.704 user 0m16.549s 00:09:45.704 sys 0m1.375s 00:09:45.704 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.704 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:45.704 ************************************ 00:09:45.704 END TEST lvs_grow_clean 00:09:45.704 ************************************ 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.963 ************************************ 00:09:45.963 START TEST lvs_grow_dirty 00:09:45.963 ************************************ 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.963 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:46.222 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:46.222 10:42:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:46.222 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=08e30fcb-5665-4b84-a4cd-944e596f7147 00:09:46.222 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:09:46.222 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:46.481 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:46.482 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:46.482 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 08e30fcb-5665-4b84-a4cd-944e596f7147 lvol 150 00:09:46.740 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e249e4fb-c189-47ad-a4e6-023f2a745a1a 00:09:46.740 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:46.740 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:46.740 [2024-11-28 10:42:25.560600] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:46.740 [2024-11-28 10:42:25.560688] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:46.740 true 00:09:46.740 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:09:46.740 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:46.998 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:46.999 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:47.258 10:42:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e249e4fb-c189-47ad-a4e6-023f2a745a1a 00:09:47.258 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:47.517 [2024-11-28 10:42:26.263056] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:47.517 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:47.777 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1543233 00:09:47.777 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:47.777 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:47.777 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1543233 /var/tmp/bdevperf.sock 00:09:47.777 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1543233 ']' 00:09:47.777 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:47.777 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.777 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:47.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:47.777 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.777 10:42:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:47.777 [2024-11-28 10:42:26.544238] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
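The dirty variant repeats the same setup against a fresh lvstore (08e30fcb-5665-4b84-a4cd-944e596f7147) and lvol (e249e4fb-c189-47ad-a4e6-023f2a745a1a) and drives the exported namespace with the same bdevperf workload traced above: 4 KiB random writes at queue depth 128 for 10 seconds on core 1, started in RPC-controlled mode so the run is kicked off with perform_tests. A condensed sketch of that invocation (binary and script paths abbreviated; the flags are exactly the ones shown in the trace):

build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests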
00:09:47.777 [2024-11-28 10:42:26.544328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543233 ] 00:09:48.036 [2024-11-28 10:42:26.697089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.036 [2024-11-28 10:42:26.797252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.605 10:42:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.605 10:42:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:48.605 10:42:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:48.864 Nvme0n1 00:09:48.864 10:42:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:49.124 [ 00:09:49.124 { 00:09:49.124 "name": "Nvme0n1", 00:09:49.124 "aliases": [ 00:09:49.124 "e249e4fb-c189-47ad-a4e6-023f2a745a1a" 00:09:49.124 ], 00:09:49.124 "product_name": "NVMe disk", 00:09:49.124 "block_size": 4096, 00:09:49.124 "num_blocks": 38912, 00:09:49.124 "uuid": "e249e4fb-c189-47ad-a4e6-023f2a745a1a", 00:09:49.124 "numa_id": 1, 00:09:49.124 "assigned_rate_limits": { 00:09:49.124 "rw_ios_per_sec": 0, 00:09:49.124 "rw_mbytes_per_sec": 0, 00:09:49.124 "r_mbytes_per_sec": 0, 00:09:49.125 "w_mbytes_per_sec": 0 00:09:49.125 }, 00:09:49.125 "claimed": false, 00:09:49.125 "zoned": false, 00:09:49.125 "supported_io_types": { 00:09:49.125 "read": true, 00:09:49.125 "write": true, 00:09:49.125 "unmap": true, 00:09:49.125 "flush": true, 00:09:49.125 "reset": true, 00:09:49.125 "nvme_admin": true, 00:09:49.125 "nvme_io": true, 00:09:49.125 "nvme_io_md": false, 00:09:49.125 "write_zeroes": true, 00:09:49.125 "zcopy": false, 00:09:49.125 "get_zone_info": false, 00:09:49.125 "zone_management": false, 00:09:49.125 "zone_append": false, 00:09:49.125 "compare": true, 00:09:49.125 "compare_and_write": true, 00:09:49.125 "abort": true, 00:09:49.125 "seek_hole": false, 00:09:49.125 "seek_data": false, 00:09:49.125 "copy": true, 00:09:49.125 "nvme_iov_md": false 00:09:49.125 }, 00:09:49.125 "memory_domains": [ 00:09:49.125 { 00:09:49.125 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:49.125 "dma_device_type": 0 00:09:49.125 } 00:09:49.125 ], 00:09:49.125 "driver_specific": { 00:09:49.125 "nvme": [ 00:09:49.125 { 00:09:49.125 "trid": { 00:09:49.125 "trtype": "RDMA", 00:09:49.125 "adrfam": "IPv4", 00:09:49.125 "traddr": "192.168.100.8", 00:09:49.125 "trsvcid": "4420", 00:09:49.125 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:49.125 }, 00:09:49.125 "ctrlr_data": { 00:09:49.125 "cntlid": 1, 00:09:49.125 "vendor_id": "0x8086", 00:09:49.125 "model_number": "SPDK bdev Controller", 00:09:49.125 "serial_number": "SPDK0", 00:09:49.125 "firmware_revision": "25.01", 00:09:49.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:49.125 "oacs": { 00:09:49.125 "security": 0, 00:09:49.125 "format": 0, 00:09:49.125 "firmware": 0, 00:09:49.125 "ns_manage": 0 00:09:49.125 }, 00:09:49.125 "multi_ctrlr": true, 
00:09:49.125 "ana_reporting": false 00:09:49.125 }, 00:09:49.125 "vs": { 00:09:49.125 "nvme_version": "1.3" 00:09:49.125 }, 00:09:49.125 "ns_data": { 00:09:49.125 "id": 1, 00:09:49.125 "can_share": true 00:09:49.125 } 00:09:49.125 } 00:09:49.125 ], 00:09:49.125 "mp_policy": "active_passive" 00:09:49.125 } 00:09:49.125 } 00:09:49.125 ] 00:09:49.125 10:42:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1543501 00:09:49.125 10:42:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:49.125 10:42:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:49.125 Running I/O for 10 seconds... 00:09:50.063 Latency(us) 00:09:50.063 [2024-11-28T09:42:28.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.063 Nvme0n1 : 1.00 30081.00 117.50 0.00 0.00 0.00 0.00 0.00 00:09:50.063 [2024-11-28T09:42:28.944Z] =================================================================================================================== 00:09:50.063 [2024-11-28T09:42:28.945Z] Total : 30081.00 117.50 0.00 0.00 0.00 0.00 0.00 00:09:50.064 00:09:51.002 10:42:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:09:51.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.261 Nvme0n1 : 2.00 30464.50 119.00 0.00 0.00 0.00 0.00 0.00 00:09:51.261 [2024-11-28T09:42:30.142Z] =================================================================================================================== 00:09:51.261 [2024-11-28T09:42:30.142Z] Total : 30464.50 119.00 0.00 0.00 0.00 0.00 0.00 00:09:51.261 00:09:51.261 true 00:09:51.261 10:42:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:09:51.261 10:42:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:51.521 10:42:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:51.521 10:42:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:51.521 10:42:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1543501 00:09:52.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.089 Nvme0n1 : 3.00 30293.33 118.33 0.00 0.00 0.00 0.00 0.00 00:09:52.089 [2024-11-28T09:42:30.970Z] =================================================================================================================== 00:09:52.089 [2024-11-28T09:42:30.970Z] Total : 30293.33 118.33 0.00 0.00 0.00 0.00 0.00 00:09:52.089 00:09:53.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.027 Nvme0n1 : 4.00 30448.00 118.94 0.00 0.00 0.00 0.00 0.00 00:09:53.027 [2024-11-28T09:42:31.908Z] 
=================================================================================================================== 00:09:53.027 [2024-11-28T09:42:31.908Z] Total : 30448.00 118.94 0.00 0.00 0.00 0.00 0.00 00:09:53.027 00:09:54.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.406 Nvme0n1 : 5.00 30573.60 119.43 0.00 0.00 0.00 0.00 0.00 00:09:54.406 [2024-11-28T09:42:33.287Z] =================================================================================================================== 00:09:54.406 [2024-11-28T09:42:33.287Z] Total : 30573.60 119.43 0.00 0.00 0.00 0.00 0.00 00:09:54.406 00:09:55.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.342 Nvme0n1 : 6.00 30662.17 119.77 0.00 0.00 0.00 0.00 0.00 00:09:55.342 [2024-11-28T09:42:34.223Z] =================================================================================================================== 00:09:55.342 [2024-11-28T09:42:34.223Z] Total : 30662.17 119.77 0.00 0.00 0.00 0.00 0.00 00:09:55.342 00:09:56.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.280 Nvme0n1 : 7.00 30738.00 120.07 0.00 0.00 0.00 0.00 0.00 00:09:56.280 [2024-11-28T09:42:35.161Z] =================================================================================================================== 00:09:56.280 [2024-11-28T09:42:35.161Z] Total : 30738.00 120.07 0.00 0.00 0.00 0.00 0.00 00:09:56.280 00:09:57.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.215 Nvme0n1 : 8.00 30784.38 120.25 0.00 0.00 0.00 0.00 0.00 00:09:57.215 [2024-11-28T09:42:36.096Z] =================================================================================================================== 00:09:57.215 [2024-11-28T09:42:36.097Z] Total : 30784.38 120.25 0.00 0.00 0.00 0.00 0.00 00:09:57.216 00:09:58.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.154 Nvme0n1 : 9.00 30812.67 120.36 0.00 0.00 0.00 0.00 0.00 00:09:58.154 [2024-11-28T09:42:37.035Z] =================================================================================================================== 00:09:58.154 [2024-11-28T09:42:37.035Z] Total : 30812.67 120.36 0.00 0.00 0.00 0.00 0.00 00:09:58.154 00:09:59.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.093 Nvme0n1 : 10.00 30850.50 120.51 0.00 0.00 0.00 0.00 0.00 00:09:59.093 [2024-11-28T09:42:37.974Z] =================================================================================================================== 00:09:59.093 [2024-11-28T09:42:37.974Z] Total : 30850.50 120.51 0.00 0.00 0.00 0.00 0.00 00:09:59.093 00:09:59.093 00:09:59.093 Latency(us) 00:09:59.093 [2024-11-28T09:42:37.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.093 Nvme0n1 : 10.00 30852.10 120.52 0.00 0.00 4145.61 2595.23 17511.22 00:09:59.093 [2024-11-28T09:42:37.974Z] =================================================================================================================== 00:09:59.093 [2024-11-28T09:42:37.974Z] Total : 30852.10 120.52 0.00 0.00 4145.61 2595.23 17511.22 00:09:59.093 { 00:09:59.093 "results": [ 00:09:59.093 { 00:09:59.093 "job": "Nvme0n1", 00:09:59.093 "core_mask": "0x2", 00:09:59.093 "workload": "randwrite", 00:09:59.093 "status": "finished", 00:09:59.093 "queue_depth": 128, 00:09:59.093 "io_size": 4096, 
00:09:59.093 "runtime": 10.00363, 00:09:59.093 "iops": 30852.100687450456, 00:09:59.093 "mibps": 120.51601831035335, 00:09:59.093 "io_failed": 0, 00:09:59.093 "io_timeout": 0, 00:09:59.093 "avg_latency_us": 4145.6120300914035, 00:09:59.093 "min_latency_us": 2595.2256, 00:09:59.093 "max_latency_us": 17511.2192 00:09:59.093 } 00:09:59.093 ], 00:09:59.093 "core_count": 1 00:09:59.093 } 00:09:59.093 10:42:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1543233 00:09:59.093 10:42:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1543233 ']' 00:09:59.093 10:42:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1543233 00:09:59.093 10:42:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:59.093 10:42:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.093 10:42:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1543233 00:09:59.352 10:42:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:59.352 10:42:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:59.352 10:42:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1543233' 00:09:59.352 killing process with pid 1543233 00:09:59.352 10:42:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1543233 00:09:59.352 Received shutdown signal, test time was about 10.000000 seconds 00:09:59.352 00:09:59.352 Latency(us) 00:09:59.352 [2024-11-28T09:42:38.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.352 [2024-11-28T09:42:38.233Z] =================================================================================================================== 00:09:59.352 [2024-11-28T09:42:38.233Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:59.352 10:42:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1543233 00:10:00.291 10:42:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:00.291 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:00.550 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:10:00.550 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:00.809 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:00.810 10:42:39 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1539640 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1539640 00:10:00.810 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1539640 Killed "${NVMF_APP[@]}" "$@" 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1545480 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1545480 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1545480 ']' 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.810 10:42:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:00.810 [2024-11-28 10:42:39.627082] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:00.810 [2024-11-28 10:42:39.627177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.069 [2024-11-28 10:42:39.788348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.069 [2024-11-28 10:42:39.889600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.069 [2024-11-28 10:42:39.889650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.069 [2024-11-28 10:42:39.889665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.069 [2024-11-28 10:42:39.889677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:01.069 [2024-11-28 10:42:39.889687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.069 [2024-11-28 10:42:39.891036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.638 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.638 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:01.638 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.638 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.638 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:01.638 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.638 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:01.897 [2024-11-28 10:42:40.644215] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:01.897 [2024-11-28 10:42:40.644385] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:01.897 [2024-11-28 10:42:40.644426] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:01.897 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:01.897 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e249e4fb-c189-47ad-a4e6-023f2a745a1a 00:10:01.897 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e249e4fb-c189-47ad-a4e6-023f2a745a1a 00:10:01.897 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.897 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:01.897 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.897 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.897 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:02.158 10:42:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e249e4fb-c189-47ad-a4e6-023f2a745a1a -t 2000 00:10:02.158 [ 00:10:02.158 { 00:10:02.158 "name": "e249e4fb-c189-47ad-a4e6-023f2a745a1a", 00:10:02.158 "aliases": [ 00:10:02.158 "lvs/lvol" 00:10:02.158 ], 00:10:02.158 "product_name": "Logical Volume", 00:10:02.158 "block_size": 4096, 00:10:02.158 "num_blocks": 38912, 00:10:02.158 "uuid": "e249e4fb-c189-47ad-a4e6-023f2a745a1a", 00:10:02.158 "assigned_rate_limits": { 00:10:02.158 "rw_ios_per_sec": 0, 00:10:02.158 "rw_mbytes_per_sec": 0, 
00:10:02.158 "r_mbytes_per_sec": 0, 00:10:02.158 "w_mbytes_per_sec": 0 00:10:02.158 }, 00:10:02.158 "claimed": false, 00:10:02.158 "zoned": false, 00:10:02.158 "supported_io_types": { 00:10:02.158 "read": true, 00:10:02.158 "write": true, 00:10:02.158 "unmap": true, 00:10:02.158 "flush": false, 00:10:02.158 "reset": true, 00:10:02.158 "nvme_admin": false, 00:10:02.158 "nvme_io": false, 00:10:02.158 "nvme_io_md": false, 00:10:02.158 "write_zeroes": true, 00:10:02.158 "zcopy": false, 00:10:02.158 "get_zone_info": false, 00:10:02.158 "zone_management": false, 00:10:02.158 "zone_append": false, 00:10:02.158 "compare": false, 00:10:02.158 "compare_and_write": false, 00:10:02.158 "abort": false, 00:10:02.158 "seek_hole": true, 00:10:02.158 "seek_data": true, 00:10:02.158 "copy": false, 00:10:02.158 "nvme_iov_md": false 00:10:02.158 }, 00:10:02.158 "driver_specific": { 00:10:02.158 "lvol": { 00:10:02.158 "lvol_store_uuid": "08e30fcb-5665-4b84-a4cd-944e596f7147", 00:10:02.158 "base_bdev": "aio_bdev", 00:10:02.158 "thin_provision": false, 00:10:02.158 "num_allocated_clusters": 38, 00:10:02.158 "snapshot": false, 00:10:02.158 "clone": false, 00:10:02.158 "esnap_clone": false 00:10:02.158 } 00:10:02.158 } 00:10:02.158 } 00:10:02.158 ] 00:10:02.158 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:02.158 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:10:02.158 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:02.418 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:02.418 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:10:02.418 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:02.678 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:02.678 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:02.678 [2024-11-28 10:42:41.540224] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:10:02.938 request: 00:10:02.938 { 00:10:02.938 "uuid": "08e30fcb-5665-4b84-a4cd-944e596f7147", 00:10:02.938 "method": "bdev_lvol_get_lvstores", 00:10:02.938 "req_id": 1 00:10:02.938 } 00:10:02.938 Got JSON-RPC error response 00:10:02.938 response: 00:10:02.938 { 00:10:02.938 "code": -19, 00:10:02.938 "message": "No such device" 00:10:02.938 } 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.938 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:03.197 aio_bdev 00:10:03.197 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e249e4fb-c189-47ad-a4e6-023f2a745a1a 00:10:03.197 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e249e4fb-c189-47ad-a4e6-023f2a745a1a 00:10:03.197 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.197 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:03.197 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.197 10:42:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.197 10:42:41 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:03.457 10:42:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e249e4fb-c189-47ad-a4e6-023f2a745a1a -t 2000 00:10:03.457 [ 00:10:03.457 { 00:10:03.457 "name": "e249e4fb-c189-47ad-a4e6-023f2a745a1a", 00:10:03.457 "aliases": [ 00:10:03.457 "lvs/lvol" 00:10:03.457 ], 00:10:03.457 "product_name": "Logical Volume", 00:10:03.457 "block_size": 4096, 00:10:03.457 "num_blocks": 38912, 00:10:03.457 "uuid": "e249e4fb-c189-47ad-a4e6-023f2a745a1a", 00:10:03.457 "assigned_rate_limits": { 00:10:03.457 "rw_ios_per_sec": 0, 00:10:03.457 "rw_mbytes_per_sec": 0, 00:10:03.457 "r_mbytes_per_sec": 0, 00:10:03.457 "w_mbytes_per_sec": 0 00:10:03.457 }, 00:10:03.457 "claimed": false, 00:10:03.457 "zoned": false, 00:10:03.457 "supported_io_types": { 00:10:03.457 "read": true, 00:10:03.457 "write": true, 00:10:03.457 "unmap": true, 00:10:03.457 "flush": false, 00:10:03.457 "reset": true, 00:10:03.457 "nvme_admin": false, 00:10:03.457 "nvme_io": false, 00:10:03.457 "nvme_io_md": false, 00:10:03.457 "write_zeroes": true, 00:10:03.457 "zcopy": false, 00:10:03.457 "get_zone_info": false, 00:10:03.457 "zone_management": false, 00:10:03.457 "zone_append": false, 00:10:03.457 "compare": false, 00:10:03.457 "compare_and_write": false, 00:10:03.457 "abort": false, 00:10:03.457 "seek_hole": true, 00:10:03.457 "seek_data": true, 00:10:03.457 "copy": false, 00:10:03.457 "nvme_iov_md": false 00:10:03.457 }, 00:10:03.457 "driver_specific": { 00:10:03.457 "lvol": { 00:10:03.457 "lvol_store_uuid": "08e30fcb-5665-4b84-a4cd-944e596f7147", 00:10:03.457 "base_bdev": "aio_bdev", 00:10:03.457 "thin_provision": false, 00:10:03.457 "num_allocated_clusters": 38, 00:10:03.457 "snapshot": false, 00:10:03.457 "clone": false, 00:10:03.457 "esnap_clone": false 00:10:03.457 } 00:10:03.457 } 00:10:03.457 } 00:10:03.457 ] 00:10:03.457 10:42:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:03.457 10:42:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:10:03.457 10:42:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:03.717 10:42:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:03.717 10:42:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:10:03.717 10:42:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:03.976 10:42:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:03.976 10:42:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e249e4fb-c189-47ad-a4e6-023f2a745a1a 00:10:03.976 10:42:42 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08e30fcb-5665-4b84-a4cd-944e596f7147 00:10:04.235 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:04.495 00:10:04.495 real 0m18.613s 00:10:04.495 user 0m48.541s 00:10:04.495 sys 0m3.416s 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:04.495 ************************************ 00:10:04.495 END TEST lvs_grow_dirty 00:10:04.495 ************************************ 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:04.495 nvmf_trace.0 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:04.495 rmmod nvme_rdma 00:10:04.495 rmmod nvme_fabrics 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:04.495 
10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1545480 ']' 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1545480 00:10:04.495 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1545480 ']' 00:10:04.496 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1545480 00:10:04.496 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:04.755 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.755 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1545480 00:10:04.755 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.755 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.755 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1545480' 00:10:04.755 killing process with pid 1545480 00:10:04.755 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1545480 00:10:04.755 10:42:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1545480 00:10:05.695 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:05.695 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:05.695 00:10:05.695 real 0m46.837s 00:10:05.695 user 1m12.698s 00:10:05.695 sys 0m11.990s 00:10:05.695 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.695 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:05.695 ************************************ 00:10:05.695 END TEST nvmf_lvs_grow 00:10:05.695 ************************************ 00:10:05.695 10:42:44 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:10:05.695 10:42:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:05.695 10:42:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.695 10:42:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.695 ************************************ 00:10:05.695 START TEST nvmf_bdev_io_wait 00:10:05.695 ************************************ 00:10:05.695 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:10:05.955 * Looking for test storage... 
00:10:05.955 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:05.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.955 --rc genhtml_branch_coverage=1 00:10:05.955 --rc genhtml_function_coverage=1 00:10:05.955 --rc genhtml_legend=1 00:10:05.955 --rc geninfo_all_blocks=1 00:10:05.955 --rc geninfo_unexecuted_blocks=1 00:10:05.955 00:10:05.955 ' 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:05.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.955 --rc genhtml_branch_coverage=1 00:10:05.955 --rc genhtml_function_coverage=1 00:10:05.955 --rc genhtml_legend=1 00:10:05.955 --rc geninfo_all_blocks=1 00:10:05.955 --rc geninfo_unexecuted_blocks=1 00:10:05.955 00:10:05.955 ' 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:05.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.955 --rc genhtml_branch_coverage=1 00:10:05.955 --rc genhtml_function_coverage=1 00:10:05.955 --rc genhtml_legend=1 00:10:05.955 --rc geninfo_all_blocks=1 00:10:05.955 --rc geninfo_unexecuted_blocks=1 00:10:05.955 00:10:05.955 ' 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:05.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.955 --rc genhtml_branch_coverage=1 00:10:05.955 --rc genhtml_function_coverage=1 00:10:05.955 --rc genhtml_legend=1 00:10:05.955 --rc geninfo_all_blocks=1 00:10:05.955 --rc geninfo_unexecuted_blocks=1 00:10:05.955 00:10:05.955 ' 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.955 10:42:44 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.955 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.956 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.956 10:42:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.937 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.937 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.937 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.938 10:42:53 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:15.938 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:15.938 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:15.938 10:42:53 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:15.938 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:15.938 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:15.938 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:15.939 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:15.939 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:15.939 altname enp217s0f0np0 00:10:15.939 altname ens818f0np0 00:10:15.939 inet 192.168.100.8/24 scope global mlx_0_0 00:10:15.939 valid_lft forever preferred_lft forever 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:15.939 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:15.939 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:15.939 altname enp217s0f1np1 00:10:15.939 altname ens818f1np1 00:10:15.939 inet 192.168.100.9/24 scope global mlx_0_1 00:10:15.939 valid_lft forever preferred_lft forever 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile 
-t rxe_net_devs 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:15.939 192.168.100.9' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:15.939 192.168.100.9' 00:10:15.939 
10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:15.939 192.168.100.9' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1550443 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1550443 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1550443 ']' 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.939 10:42:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.939 [2024-11-28 10:42:53.477132] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
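The trace above resolves the first IPv4 address on each Mellanox port (192.168.100.8 on mlx_0_0 and 192.168.100.9 on mlx_0_1) and exports them as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP before nvmf_tgt is launched. A minimal standalone sketch of the same lookup, assuming the interfaces already carry the addresses shown in the ip output above:

    #!/usr/bin/env bash
    # Mirror get_ip_address() from nvmf/common.sh for the two RDMA netdevs.
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # Prints 192.168.100.8 and 192.168.100.9 on this rig.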
00:10:15.939 [2024-11-28 10:42:53.477251] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.939 [2024-11-28 10:42:53.636701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.939 [2024-11-28 10:42:53.740806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.939 [2024-11-28 10:42:53.740860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.939 [2024-11-28 10:42:53.740873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.939 [2024-11-28 10:42:53.740886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.939 [2024-11-28 10:42:53.740896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.939 [2024-11-28 10:42:53.743530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.939 [2024-11-28 10:42:53.743616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.939 [2024-11-28 10:42:53.743669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.939 [2024-11-28 10:42:53.743677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:15.939 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.940 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.940 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.940 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:15.940 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.940 10:42:54 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.940 [2024-11-28 10:42:54.572753] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fcd5d9bd940) succeed. 00:10:15.940 [2024-11-28 10:42:54.582985] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fcd5d979940) succeed. 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:16.200 Malloc0 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:16.200 [2024-11-28 10:42:54.961476] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1550769 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1550772 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local 
subsystem config 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:16.200 { 00:10:16.200 "params": { 00:10:16.200 "name": "Nvme$subsystem", 00:10:16.200 "trtype": "$TEST_TRANSPORT", 00:10:16.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.200 "adrfam": "ipv4", 00:10:16.200 "trsvcid": "$NVMF_PORT", 00:10:16.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.200 "hdgst": ${hdgst:-false}, 00:10:16.200 "ddgst": ${ddgst:-false} 00:10:16.200 }, 00:10:16.200 "method": "bdev_nvme_attach_controller" 00:10:16.200 } 00:10:16.200 EOF 00:10:16.200 )") 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1550775 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:16.200 { 00:10:16.200 "params": { 00:10:16.200 "name": "Nvme$subsystem", 00:10:16.200 "trtype": "$TEST_TRANSPORT", 00:10:16.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.200 "adrfam": "ipv4", 00:10:16.200 "trsvcid": "$NVMF_PORT", 00:10:16.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.200 "hdgst": ${hdgst:-false}, 00:10:16.200 "ddgst": ${ddgst:-false} 00:10:16.200 }, 00:10:16.200 "method": "bdev_nvme_attach_controller" 00:10:16.200 } 00:10:16.200 EOF 00:10:16.200 )") 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:16.200 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1550779 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:16.201 { 00:10:16.201 "params": { 00:10:16.201 "name": "Nvme$subsystem", 00:10:16.201 "trtype": 
"$TEST_TRANSPORT", 00:10:16.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.201 "adrfam": "ipv4", 00:10:16.201 "trsvcid": "$NVMF_PORT", 00:10:16.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.201 "hdgst": ${hdgst:-false}, 00:10:16.201 "ddgst": ${ddgst:-false} 00:10:16.201 }, 00:10:16.201 "method": "bdev_nvme_attach_controller" 00:10:16.201 } 00:10:16.201 EOF 00:10:16.201 )") 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:16.201 { 00:10:16.201 "params": { 00:10:16.201 "name": "Nvme$subsystem", 00:10:16.201 "trtype": "$TEST_TRANSPORT", 00:10:16.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.201 "adrfam": "ipv4", 00:10:16.201 "trsvcid": "$NVMF_PORT", 00:10:16.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.201 "hdgst": ${hdgst:-false}, 00:10:16.201 "ddgst": ${ddgst:-false} 00:10:16.201 }, 00:10:16.201 "method": "bdev_nvme_attach_controller" 00:10:16.201 } 00:10:16.201 EOF 00:10:16.201 )") 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1550769 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:16.201 "params": { 00:10:16.201 "name": "Nvme1", 00:10:16.201 "trtype": "rdma", 00:10:16.201 "traddr": "192.168.100.8", 00:10:16.201 "adrfam": "ipv4", 00:10:16.201 "trsvcid": "4420", 00:10:16.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.201 "hdgst": false, 00:10:16.201 "ddgst": false 00:10:16.201 }, 00:10:16.201 "method": "bdev_nvme_attach_controller" 00:10:16.201 }' 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:16.201 "params": { 00:10:16.201 "name": "Nvme1", 00:10:16.201 "trtype": "rdma", 00:10:16.201 "traddr": "192.168.100.8", 00:10:16.201 "adrfam": "ipv4", 00:10:16.201 "trsvcid": "4420", 00:10:16.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.201 "hdgst": false, 00:10:16.201 "ddgst": false 00:10:16.201 }, 00:10:16.201 "method": "bdev_nvme_attach_controller" 00:10:16.201 }' 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:16.201 "params": { 00:10:16.201 "name": "Nvme1", 00:10:16.201 "trtype": "rdma", 00:10:16.201 "traddr": "192.168.100.8", 00:10:16.201 "adrfam": "ipv4", 00:10:16.201 "trsvcid": "4420", 00:10:16.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.201 "hdgst": false, 00:10:16.201 "ddgst": false 00:10:16.201 }, 00:10:16.201 "method": "bdev_nvme_attach_controller" 00:10:16.201 }' 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:16.201 10:42:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:16.201 "params": { 00:10:16.201 "name": "Nvme1", 00:10:16.201 "trtype": "rdma", 00:10:16.201 "traddr": "192.168.100.8", 00:10:16.201 "adrfam": "ipv4", 00:10:16.201 "trsvcid": "4420", 00:10:16.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.201 "hdgst": false, 00:10:16.201 "ddgst": false 00:10:16.201 }, 00:10:16.201 "method": "bdev_nvme_attach_controller" 00:10:16.201 }' 00:10:16.201 [2024-11-28 10:42:55.049229] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:16.201 [2024-11-28 10:42:55.049327] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:16.201 [2024-11-28 10:42:55.051055] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:16.201 [2024-11-28 10:42:55.051145] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:16.201 [2024-11-28 10:42:55.054629] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:16.201 [2024-11-28 10:42:55.054709] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-28 10:42:55.054693] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:16.201 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:16.201 [2024-11-28 10:42:55.054766] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:16.459 [2024-11-28 10:42:55.327187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.719 [2024-11-28 10:42:55.419639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.719 [2024-11-28 10:42:55.431034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.719 [2024-11-28 10:42:55.516037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:16.719 [2024-11-28 10:42:55.523113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.719 [2024-11-28 10:42:55.577594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.979 [2024-11-28 10:42:55.642518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:16.979 [2024-11-28 10:42:55.675626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:17.238 Running I/O for 1 seconds... 00:10:17.238 Running I/O for 1 seconds... 00:10:17.238 Running I/O for 1 seconds... 00:10:17.238 Running I/O for 1 seconds... 00:10:18.179 16998.00 IOPS, 66.40 MiB/s 00:10:18.179 Latency(us) 00:10:18.179 [2024-11-28T09:42:57.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.179 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:18.179 Nvme1n1 : 1.01 17035.22 66.54 0.00 0.00 7488.01 4744.81 20971.52 00:10:18.179 [2024-11-28T09:42:57.060Z] =================================================================================================================== 00:10:18.179 [2024-11-28T09:42:57.060Z] Total : 17035.22 66.54 0.00 0.00 7488.01 4744.81 20971.52 00:10:18.179 15753.00 IOPS, 61.54 MiB/s 00:10:18.179 Latency(us) 00:10:18.179 [2024-11-28T09:42:57.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.179 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:18.179 Nvme1n1 : 1.01 15802.93 61.73 0.00 0.00 8073.42 4613.73 22858.96 00:10:18.179 [2024-11-28T09:42:57.060Z] =================================================================================================================== 00:10:18.179 [2024-11-28T09:42:57.060Z] Total : 15802.93 61.73 0.00 0.00 8073.42 4613.73 22858.96 00:10:18.179 226152.00 IOPS, 883.41 MiB/s 00:10:18.179 Latency(us) 00:10:18.179 [2024-11-28T09:42:57.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.179 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:18.179 Nvme1n1 : 1.00 225788.80 881.99 0.00 0.00 564.15 253.95 2569.01 00:10:18.179 [2024-11-28T09:42:57.060Z] =================================================================================================================== 00:10:18.179 [2024-11-28T09:42:57.060Z] Total : 225788.80 881.99 0.00 0.00 564.15 253.95 2569.01 00:10:18.437 15059.00 IOPS, 58.82 MiB/s 00:10:18.437 Latency(us) 00:10:18.437 [2024-11-28T09:42:57.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.437 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:18.437 Nvme1n1 : 1.01 15131.59 59.11 0.00 
0.00 8435.60 3827.30 27053.26 00:10:18.437 [2024-11-28T09:42:57.318Z] =================================================================================================================== 00:10:18.437 [2024-11-28T09:42:57.318Z] Total : 15131.59 59.11 0.00 0.00 8435.60 3827.30 27053.26 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1550772 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1550775 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1550779 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:19.006 rmmod nvme_rdma 00:10:19.006 rmmod nvme_fabrics 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1550443 ']' 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1550443 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1550443 ']' 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1550443 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.006 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1550443 00:10:19.266 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.266 10:42:57 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.266 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1550443' 00:10:19.266 killing process with pid 1550443 00:10:19.266 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1550443 00:10:19.266 10:42:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1550443 00:10:20.646 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:20.646 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:20.646 00:10:20.646 real 0m14.922s 00:10:20.646 user 0m32.292s 00:10:20.646 sys 0m8.628s 00:10:20.646 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.646 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:20.646 ************************************ 00:10:20.646 END TEST nvmf_bdev_io_wait 00:10:20.646 ************************************ 00:10:20.646 10:42:59 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:20.646 10:42:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:20.646 10:42:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.646 10:42:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.907 ************************************ 00:10:20.907 START TEST nvmf_queue_depth 00:10:20.907 ************************************ 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:20.907 * Looking for test storage... 
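That closes out nvmf_bdev_io_wait (roughly 15 s wall time per the timing summary above), and the harness moves on to the queue_depth stage on the same rdma transport. For reference, the target-side objects the finished stage created via rpc_cmd were a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and an rdma listener on 192.168.100.8:4420. A hedged sketch of the same sequence issued directly with scripts/rpc.py, assuming rpc_cmd wraps that script and the target listens on the default /var/tmp/spdk.sock:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420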
00:10:20.907 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:20.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.907 --rc genhtml_branch_coverage=1 00:10:20.907 --rc genhtml_function_coverage=1 00:10:20.907 --rc genhtml_legend=1 00:10:20.907 --rc geninfo_all_blocks=1 00:10:20.907 --rc geninfo_unexecuted_blocks=1 00:10:20.907 00:10:20.907 ' 00:10:20.907 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:20.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.907 --rc genhtml_branch_coverage=1 00:10:20.907 --rc genhtml_function_coverage=1 00:10:20.907 --rc genhtml_legend=1 00:10:20.908 --rc geninfo_all_blocks=1 00:10:20.908 --rc geninfo_unexecuted_blocks=1 00:10:20.908 00:10:20.908 ' 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:20.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.908 --rc genhtml_branch_coverage=1 00:10:20.908 --rc genhtml_function_coverage=1 00:10:20.908 --rc genhtml_legend=1 00:10:20.908 --rc geninfo_all_blocks=1 00:10:20.908 --rc geninfo_unexecuted_blocks=1 00:10:20.908 00:10:20.908 ' 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:20.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.908 --rc genhtml_branch_coverage=1 00:10:20.908 --rc genhtml_function_coverage=1 00:10:20.908 --rc genhtml_legend=1 00:10:20.908 --rc geninfo_all_blocks=1 00:10:20.908 --rc geninfo_unexecuted_blocks=1 00:10:20.908 00:10:20.908 ' 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.908 10:42:59 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.908 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.908 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.168 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:21.168 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:21.168 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:21.168 10:42:59 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:29.294 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:29.294 Found 0000:d9:00.1 (0x15b3 - 0x1015) 
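gather_supported_nvmf_pci_devs repeats the same enumeration for the queue_depth stage: with SPDK_TEST_NVMF_NICS=mlx5 only the Mellanox entries are kept, and both ports at 0000:d9:00.0 and 0000:d9:00.1 (vendor 0x15b3, device 0x1015) are found again. A quick way to confirm the same devices by hand, assuming lspci is available on the host:

    # Vendor 0x15b3 (Mellanox), device 0x1015, as reported in the trace above.
    lspci -Dnn -d 15b3:1015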
00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:29.294 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:29.294 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:10:29.294 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:29.295 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.295 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:29.295 altname enp217s0f0np0 00:10:29.295 altname ens818f0np0 00:10:29.295 inet 192.168.100.8/24 scope global mlx_0_0 00:10:29.295 valid_lft forever preferred_lft forever 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:29.295 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.295 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:29.295 altname enp217s0f1np1 00:10:29.295 altname ens818f1np1 00:10:29.295 inet 192.168.100.9/24 scope global mlx_0_1 00:10:29.295 valid_lft forever preferred_lft forever 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:29.295 10:43:07 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:29.295 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:29.296 192.168.100.9' 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:29.296 192.168.100.9' 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:29.296 192.168.100.9' 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1555710 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1555710 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1555710 ']' 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:29.296 10:43:07 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:29.296 [2024-11-28 10:43:07.940904] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:29.296 [2024-11-28 10:43:07.940995] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.296 [2024-11-28 10:43:08.096627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.555 [2024-11-28 10:43:08.192989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.555 [2024-11-28 10:43:08.193035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.555 [2024-11-28 10:43:08.193048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.555 [2024-11-28 10:43:08.193061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.555 [2024-11-28 10:43:08.193070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.555 [2024-11-28 10:43:08.194521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:30.124 [2024-11-28 10:43:08.794011] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f305c7bd940) succeed. 00:10:30.124 [2024-11-28 10:43:08.803036] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f305c779940) succeed. 
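At this point the environment for the queue-depth test is in place: load_ib_rdma_modules has pulled in the IB/RDMA kernel modules, allocate_nic_ips has confirmed 192.168.100.8/24 and 192.168.100.9/24 on mlx_0_0/mlx_0_1, and nvmfappstart has launched nvmf_tgt on core mask 0x2 and created the RDMA transport, which is what produces the two create_ib_device notices. A condensed sketch of those steps, using the interface names and repo paths from this host:

  # IB/RDMA stack loaded by load_ib_rdma_modules (plus nvme-rdma for the initiator side)
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do modprobe "$m"; done
  # test addresses expected on the two mlx5 ports
  ip -o -4 addr show mlx_0_0   # inet 192.168.100.8/24
  ip -o -4 addr show mlx_0_1   # inet 192.168.100.9/24
  # start the target and create the RDMA transport (queue_depth.sh@21 / @23);
  # the harness waits for /var/tmp/spdk.sock via waitforlisten before issuing RPCs
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192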
00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:30.124 Malloc0 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:30.124 [2024-11-28 10:43:08.967933] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1555793 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1555793 /var/tmp/bdevperf.sock 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1555793 ']' 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:30.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
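The four rpc_cmd calls above build the target side of the test: a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and an RDMA listener on the first test IP. rpc_cmd is the autotest helper that forwards to scripts/rpc.py against /var/tmp/spdk.sock, so the manual equivalent is roughly:

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420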
00:10:30.124 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.125 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:30.125 10:43:08 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:30.384 [2024-11-28 10:43:09.051567] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:30.384 [2024-11-28 10:43:09.051681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1555793 ] 00:10:30.384 [2024-11-28 10:43:09.205120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.644 [2024-11-28 10:43:09.307160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.212 10:43:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.212 10:43:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:31.212 10:43:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:31.212 10:43:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.212 10:43:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.212 NVMe0n1 00:10:31.212 10:43:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.212 10:43:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:31.212 Running I/O for 10 seconds... 
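On the initiator side the harness starts bdevperf in RPC-server mode (-z) with a queue depth of 1024, 4096-byte I/O, a verify workload and a 10-second run, attaches the exported namespace over RDMA through bdevperf's private RPC socket, and then triggers the run with bdevperf.py. Condensed, with the same paths and addresses as above:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

With -o 4096 the MiB/s column is simply IOPS × 4 KiB: the 15726.76 IOPS in the summary below works out to 15726.76 × 4096 B ≈ 61.4 MiB/s, matching the reported 61.43 MiB/s.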
00:10:33.531 14717.00 IOPS, 57.49 MiB/s [2024-11-28T09:43:13.350Z] 15360.00 IOPS, 60.00 MiB/s [2024-11-28T09:43:14.288Z] 15420.00 IOPS, 60.23 MiB/s [2024-11-28T09:43:15.229Z] 15615.25 IOPS, 61.00 MiB/s [2024-11-28T09:43:16.167Z] 15589.40 IOPS, 60.90 MiB/s [2024-11-28T09:43:17.106Z] 15678.17 IOPS, 61.24 MiB/s [2024-11-28T09:43:18.485Z] 15652.57 IOPS, 61.14 MiB/s [2024-11-28T09:43:19.424Z] 15718.50 IOPS, 61.40 MiB/s [2024-11-28T09:43:20.363Z] 15701.33 IOPS, 61.33 MiB/s [2024-11-28T09:43:20.363Z] 15714.40 IOPS, 61.38 MiB/s 00:10:41.482 Latency(us) 00:10:41.482 [2024-11-28T09:43:20.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.482 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:41.482 Verification LBA range: start 0x0 length 0x4000 00:10:41.482 NVMe0n1 : 10.03 15726.76 61.43 0.00 0.00 64909.99 3132.62 41313.89 00:10:41.482 [2024-11-28T09:43:20.363Z] =================================================================================================================== 00:10:41.482 [2024-11-28T09:43:20.363Z] Total : 15726.76 61.43 0.00 0.00 64909.99 3132.62 41313.89 00:10:41.482 { 00:10:41.482 "results": [ 00:10:41.482 { 00:10:41.482 "job": "NVMe0n1", 00:10:41.482 "core_mask": "0x1", 00:10:41.482 "workload": "verify", 00:10:41.482 "status": "finished", 00:10:41.482 "verify_range": { 00:10:41.482 "start": 0, 00:10:41.482 "length": 16384 00:10:41.482 }, 00:10:41.482 "queue_depth": 1024, 00:10:41.482 "io_size": 4096, 00:10:41.482 "runtime": 10.034806, 00:10:41.482 "iops": 15726.76143415229, 00:10:41.482 "mibps": 61.43266185215738, 00:10:41.482 "io_failed": 0, 00:10:41.482 "io_timeout": 0, 00:10:41.482 "avg_latency_us": 64909.988118361376, 00:10:41.482 "min_latency_us": 3132.6208, 00:10:41.482 "max_latency_us": 41313.8944 00:10:41.482 } 00:10:41.482 ], 00:10:41.482 "core_count": 1 00:10:41.482 } 00:10:41.482 10:43:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1555793 00:10:41.482 10:43:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1555793 ']' 00:10:41.482 10:43:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1555793 00:10:41.482 10:43:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:41.482 10:43:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.482 10:43:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1555793 00:10:41.482 10:43:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.482 10:43:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.482 10:43:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1555793' 00:10:41.482 killing process with pid 1555793 00:10:41.482 10:43:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1555793 00:10:41.482 Received shutdown signal, test time was about 10.000000 seconds 00:10:41.482 00:10:41.482 Latency(us) 00:10:41.482 [2024-11-28T09:43:20.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.482 [2024-11-28T09:43:20.363Z] 
=================================================================================================================== 00:10:41.482 [2024-11-28T09:43:20.363Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:41.482 10:43:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1555793 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:42.421 rmmod nvme_rdma 00:10:42.421 rmmod nvme_fabrics 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1555710 ']' 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1555710 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1555710 ']' 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1555710 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1555710 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1555710' 00:10:42.421 killing process with pid 1555710 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1555710 00:10:42.421 10:43:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1555710 00:10:43.801 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.801 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:43.801 00:10:43.801 real 0m23.006s 00:10:43.801 user 0m28.999s 00:10:43.801 sys 0m7.224s 00:10:43.801 
10:43:22 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.801 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.801 ************************************ 00:10:43.801 END TEST nvmf_queue_depth 00:10:43.801 ************************************ 00:10:43.801 10:43:22 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:43.801 10:43:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.801 10:43:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.801 10:43:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.801 ************************************ 00:10:43.801 START TEST nvmf_target_multipath 00:10:43.801 ************************************ 00:10:43.801 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:44.061 * Looking for test storage... 00:10:44.061 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.061 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.062 --rc genhtml_branch_coverage=1 00:10:44.062 --rc genhtml_function_coverage=1 00:10:44.062 --rc genhtml_legend=1 00:10:44.062 --rc geninfo_all_blocks=1 00:10:44.062 --rc geninfo_unexecuted_blocks=1 00:10:44.062 00:10:44.062 ' 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.062 --rc genhtml_branch_coverage=1 00:10:44.062 --rc genhtml_function_coverage=1 00:10:44.062 --rc genhtml_legend=1 00:10:44.062 --rc geninfo_all_blocks=1 00:10:44.062 --rc geninfo_unexecuted_blocks=1 00:10:44.062 00:10:44.062 ' 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.062 --rc genhtml_branch_coverage=1 00:10:44.062 --rc genhtml_function_coverage=1 00:10:44.062 --rc genhtml_legend=1 00:10:44.062 --rc geninfo_all_blocks=1 00:10:44.062 --rc geninfo_unexecuted_blocks=1 00:10:44.062 00:10:44.062 ' 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.062 --rc genhtml_branch_coverage=1 00:10:44.062 --rc genhtml_function_coverage=1 00:10:44.062 --rc genhtml_legend=1 00:10:44.062 --rc geninfo_all_blocks=1 00:10:44.062 --rc geninfo_unexecuted_blocks=1 00:10:44.062 00:10:44.062 ' 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.062 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.062 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.063 10:43:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.183 10:43:30 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:52.183 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:52.183 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:52.183 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:52.183 
10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:52.183 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:52.183 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
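The trace above has just classified the two ConnectX ports under 0000:d9:00.0/1 as mlx5 devices and is now bringing up the RDMA stack: rdma_device_init loads the IB/RDMA kernel modules, then allocate_nic_ips walks the mlx_0_* interfaces and reads their IPv4 addresses. A minimal sketch of those two steps, using only commands that appear in the trace (the names load_rdma_modules and port_ip are illustrative; the suite's real helpers are load_ib_rdma_modules and get_ip_address):

# Load the kernel modules needed for NVMe-oF over RDMA (as traced at nvmf/common.sh@62-72).
load_rdma_modules() {
    [ "$(uname)" = Linux ] || return 0          # RDMA modules are Linux-only
    local mod
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
}

# Read the first IPv4 address of an RDMA interface (nvmf/common.sh@116-117),
# e.g. port_ip mlx_0_0 -> 192.168.100.8 in this run.
port_ip() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}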
00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:52.445 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.445 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:52.445 altname enp217s0f0np0 00:10:52.445 altname ens818f0np0 00:10:52.445 inet 192.168.100.8/24 scope global mlx_0_0 00:10:52.445 valid_lft forever preferred_lft forever 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:52.445 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.445 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:52.445 altname enp217s0f1np1 00:10:52.445 altname ens818f1np1 00:10:52.445 inet 192.168.100.9/24 scope global mlx_0_1 00:10:52.445 valid_lft forever preferred_lft forever 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:52.445 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:52.446 192.168.100.9' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:52.446 192.168.100.9' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:52.446 192.168.100.9' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:10:52.446 run this test only with TCP transport for now 00:10:52.446 10:43:31 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:52.446 rmmod nvme_rdma 00:10:52.446 rmmod nvme_fabrics 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:52.446 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:52.706 00:10:52.706 real 0m8.678s 00:10:52.706 user 0m2.467s 00:10:52.706 sys 0m6.457s 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 
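The tail of this run shows why nvmf_target_multipath finishes almost immediately on an RDMA rig: target/multipath.sh checks the second target IP and the transport, and because the transport here is rdma rather than tcp it prints the notice above, tears the target down and exits 0, so the test is effectively skipped rather than failed. A hedged reconstruction of that guard from the multipath.sh@45-54 trace; TEST_TRANSPORT is an assumed name for the rdma/tcp value being compared:

# Early-exit guard traced above: multipath is TCP-only for now.
# (@45 also verifies that a second target IP exists; that branch is not taken in this run.)
if [ "$TEST_TRANSPORT" != tcp ]; then
    echo 'run this test only with TCP transport for now'
    nvmftestfini              # teardown: sync, modprobe -r nvme-rdma/nvme-fabrics, reset opts
    exit 0
fi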
00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:52.706 ************************************ 00:10:52.706 END TEST nvmf_target_multipath 00:10:52.706 ************************************ 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.706 ************************************ 00:10:52.706 START TEST nvmf_zcopy 00:10:52.706 ************************************ 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:52.706 * Looking for test storage... 00:10:52.706 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:52.706 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:52.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.966 --rc genhtml_branch_coverage=1 00:10:52.966 --rc genhtml_function_coverage=1 00:10:52.966 --rc genhtml_legend=1 00:10:52.966 --rc geninfo_all_blocks=1 00:10:52.966 --rc geninfo_unexecuted_blocks=1 00:10:52.966 00:10:52.966 ' 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:52.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.966 --rc genhtml_branch_coverage=1 00:10:52.966 --rc genhtml_function_coverage=1 00:10:52.966 --rc genhtml_legend=1 00:10:52.966 --rc geninfo_all_blocks=1 00:10:52.966 --rc geninfo_unexecuted_blocks=1 00:10:52.966 00:10:52.966 ' 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:52.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.966 --rc genhtml_branch_coverage=1 00:10:52.966 --rc genhtml_function_coverage=1 00:10:52.966 --rc genhtml_legend=1 00:10:52.966 --rc geninfo_all_blocks=1 00:10:52.966 --rc geninfo_unexecuted_blocks=1 00:10:52.966 00:10:52.966 ' 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:52.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.966 --rc genhtml_branch_coverage=1 00:10:52.966 --rc genhtml_function_coverage=1 00:10:52.966 --rc genhtml_legend=1 00:10:52.966 --rc geninfo_all_blocks=1 00:10:52.966 --rc geninfo_unexecuted_blocks=1 00:10:52.966 00:10:52.966 ' 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.966 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.967 10:43:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:01.262 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:01.262 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
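For the zcopy run the same discovery sequence repeats: each mlx5 PCI function is mapped to its Linux netdev by listing the device's net/ directory in sysfs, which is where the "Found net devices under 0000:d9:00.x" lines come from. A condensed sketch of that loop as traced in nvmf/common.sh@410-429 (array names copied from the trace):

for pci in "${pci_devs[@]}"; do
    # e.g. /sys/bus/pci/devices/0000:d9:00.0/net/mlx_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done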
00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:01.262 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:01.262 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.262 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:01.263 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:01.263 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:01.263 altname enp217s0f0np0 00:11:01.263 altname ens818f0np0 00:11:01.263 inet 192.168.100.8/24 scope global mlx_0_0 
00:11:01.263 valid_lft forever preferred_lft forever 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:01.263 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:01.263 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:01.263 altname enp217s0f1np1 00:11:01.263 altname ens818f1np1 00:11:01.263 inet 192.168.100.9/24 scope global mlx_0_1 00:11:01.263 valid_lft forever preferred_lft forever 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:01.263 10:43:39 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:01.263 192.168.100.9' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:01.263 192.168.100.9' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:01.263 192.168.100.9' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:01.263 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:01.264 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:01.264 10:43:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1566432 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1566432 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1566432 ']' 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.264 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.264 [2024-11-28 10:43:40.122185] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:01.264 [2024-11-28 10:43:40.122282] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.523 [2024-11-28 10:43:40.279310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.523 [2024-11-28 10:43:40.378297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.523 [2024-11-28 10:43:40.378350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.523 [2024-11-28 10:43:40.378364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.523 [2024-11-28 10:43:40.378378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.523 [2024-11-28 10:43:40.378387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
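At this point the zcopy script has started its own NVMe-oF target: nvmfappstart launches build/bin/nvmf_tgt with the shared-memory id and core mask seen above (-i 0 -e 0xFFFF -m 0x2), records its pid (1566432 in this run) and waits for the RPC socket before returning. A hedged sketch based on the nvmf/common.sh@507-510 trace; the backgrounding and variable names are assumptions, not shown verbatim in the log:

nvmfappstart() {
    # start the SPDK NVMe-oF target and remember its pid
    "$rootdir/build/bin/nvmf_tgt" -i "$NVMF_APP_SHM_ID" -e 0xFFFF "$@" &
    nvmfpid=$!
    # block until the app listens on /var/tmp/spdk.sock and answers RPCs
    waitforlisten "$nvmfpid"
}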
00:11:01.523 [2024-11-28 10:43:40.379834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:11:02.090 Unsupported transport: rdma 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:11:02.090 10:43:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:02.349 nvmf_trace.0 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:02.349 rmmod nvme_rdma 00:11:02.349 rmmod nvme_fabrics 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
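Even though zcopy.sh bails out early ("Unsupported transport: rdma", exit 0), the EXIT trap set right after the target came up (nvmf/common.sh@512) still runs process_shm, which is what produces the nvmf_trace.0_shm.tar.gz artifact seen above before nvmftestfini unloads nvme-rdma and nvme-fabrics. A sketch of that collection step, reconstructed from the autotest_common.sh@812-827 trace; collect_shm_traces and output_dir are illustrative names (the suite's helper is process_shm --id <n>, and the log shows the tarball landing under .../spdk/../output/):

collect_shm_traces() {
    local id=$1 f
    # pick up the per-app trace files, e.g. /dev/shm/nvmf_trace.0
    for f in $(find /dev/shm -name "*.$id" -printf '%f\n'); do
        tar -C /dev/shm/ -cvzf "$output_dir/${f}_shm.tar.gz" "$f"
    done
}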
00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1566432 ']' 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1566432 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1566432 ']' 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1566432 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566432 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566432' 00:11:02.349 killing process with pid 1566432 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1566432 00:11:02.349 10:43:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1566432 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:03.725 00:11:03.725 real 0m10.770s 00:11:03.725 user 0m4.470s 00:11:03.725 sys 0m7.025s 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.725 ************************************ 00:11:03.725 END TEST nvmf_zcopy 00:11:03.725 ************************************ 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.725 ************************************ 00:11:03.725 START TEST nvmf_nmic 00:11:03.725 ************************************ 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:03.725 * Looking for test storage... 
00:11:03.725 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:03.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.725 --rc genhtml_branch_coverage=1 00:11:03.725 --rc genhtml_function_coverage=1 00:11:03.725 --rc genhtml_legend=1 00:11:03.725 --rc geninfo_all_blocks=1 00:11:03.725 --rc geninfo_unexecuted_blocks=1 00:11:03.725 00:11:03.725 ' 00:11:03.725 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:03.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.726 --rc genhtml_branch_coverage=1 00:11:03.726 --rc genhtml_function_coverage=1 00:11:03.726 --rc genhtml_legend=1 00:11:03.726 --rc geninfo_all_blocks=1 00:11:03.726 --rc geninfo_unexecuted_blocks=1 00:11:03.726 00:11:03.726 ' 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:03.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.726 --rc genhtml_branch_coverage=1 00:11:03.726 --rc genhtml_function_coverage=1 00:11:03.726 --rc genhtml_legend=1 00:11:03.726 --rc geninfo_all_blocks=1 00:11:03.726 --rc geninfo_unexecuted_blocks=1 00:11:03.726 00:11:03.726 ' 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:03.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.726 --rc genhtml_branch_coverage=1 00:11:03.726 --rc genhtml_function_coverage=1 00:11:03.726 --rc genhtml_legend=1 00:11:03.726 --rc geninfo_all_blocks=1 00:11:03.726 --rc geninfo_unexecuted_blocks=1 00:11:03.726 00:11:03.726 ' 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.726 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
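The "[: : integer expression expected" message above is not fatal: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' against a flag that is unset in this run, so test prints the warning, the condition evaluates false, and build_nvmf_app_args simply carries on. A defensive sketch of the same check, with SOME_FLAG standing in for whichever variable is empty here (its real name is not visible in the trace), would default the value before the numeric comparison:

  # hypothetical guard; SOME_FLAG is a placeholder for the unset flag in common.sh
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi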
00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:03.726 10:43:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.711 10:43:50 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:13.711 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:13.711 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:13.711 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:13.711 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:13.711 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:13.712 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:13.712 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:13.712 altname enp217s0f0np0 00:11:13.712 altname 
ens818f0np0 00:11:13.712 inet 192.168.100.8/24 scope global mlx_0_0 00:11:13.712 valid_lft forever preferred_lft forever 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:13.712 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:13.712 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:13.712 altname enp217s0f1np1 00:11:13.712 altname ens818f1np1 00:11:13.712 inet 192.168.100.9/24 scope global mlx_0_1 00:11:13.712 valid_lft forever preferred_lft forever 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:13.712 192.168.100.9' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:13.712 192.168.100.9' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:13.712 192.168.100.9' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:13.712 10:43:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:13.712 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:13.712 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:13.712 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.712 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.712 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1570947 00:11:13.712 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.712 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1570947 00:11:13.712 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1570947 ']' 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.713 [2024-11-28 10:43:51.116515] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:13.713 [2024-11-28 10:43:51.116617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.713 [2024-11-28 10:43:51.270495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.713 [2024-11-28 10:43:51.369623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.713 [2024-11-28 10:43:51.369674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.713 [2024-11-28 10:43:51.369687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.713 [2024-11-28 10:43:51.369698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.713 [2024-11-28 10:43:51.369708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
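This second target instance for the nmic test is started with -m 0xF rather than the -m 0x2 used for the zcopy run above: the mask selects CPUs per bit, so 0xF (binary 1111) yields reactors on cores 0-3 while 0x2 gave a single reactor on core 1. A tiny bash sketch that expands such a mask into core numbers:

  # worked example: mask=0xF prints cores 0 1 2 3
  mask=0xF
  for core in $(seq 0 63); do
    (( (mask >> core) & 1 )) && echo "core $core"
  done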
00:11:13.713 [2024-11-28 10:43:51.372175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.713 [2024-11-28 10:43:51.372194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.713 [2024-11-28 10:43:51.372286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.713 [2024-11-28 10:43:51.372294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.713 10:43:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.713 [2024-11-28 10:43:52.016257] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f1d1c901940) succeed. 00:11:13.713 [2024-11-28 10:43:52.025764] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f1d1c7bd940) succeed. 
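rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py talking to the default socket /var/tmp/spdk.sock. The RDMA transport just created, plus the bdev, subsystem, namespace, and listener set up in the steps that follow, would look roughly like this when issued standalone, reusing the exact arguments from the trace:

  # assuming the target is listening on the default /var/tmp/spdk.sock
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420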
00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.713 Malloc0 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.713 [2024-11-28 10:43:52.391626] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:13.713 test case1: single bdev can't be used in multiple subsystems 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:13.713 10:43:52 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.713 [2024-11-28 10:43:52.419396] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:13.713 [2024-11-28 10:43:52.419427] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:13.713 [2024-11-28 10:43:52.419441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.713 request: 00:11:13.713 { 00:11:13.713 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:13.713 "namespace": { 00:11:13.713 "bdev_name": "Malloc0", 00:11:13.713 "no_auto_visible": false, 00:11:13.713 "hide_metadata": false 00:11:13.713 }, 00:11:13.713 "method": "nvmf_subsystem_add_ns", 00:11:13.713 "req_id": 1 00:11:13.713 } 00:11:13.713 Got JSON-RPC error response 00:11:13.713 response: 00:11:13.713 { 00:11:13.713 "code": -32602, 00:11:13.713 "message": "Invalid parameters" 00:11:13.713 } 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:13.713 Adding namespace failed - expected result. 
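Test case1 above is the intended negative check: Malloc0 is already claimed exclusive_write by cnode1, so adding it to cnode2 fails with the JSON-RPC -32602 response, the script records nmic_status=1, and the "Adding namespace failed - expected result." echo confirms the expectation. Reproduced by hand against the same socket, the failing step would be roughly:

  # expected to fail: Malloc0 is already attached to nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
      || echo "got the expected 'bdev Malloc0 cannot be opened' error"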
00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:13.713 test case2: host connect to nvmf target in multiple paths 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.713 [2024-11-28 10:43:52.435468] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.713 10:43:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:14.649 10:43:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:11:15.585 10:43:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.585 10:43:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:15.585 10:43:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.585 10:43:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:15.585 10:43:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:18.131 10:43:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:18.131 10:43:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:18.131 10:43:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.131 10:43:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:18.131 10:43:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.131 10:43:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:18.131 10:43:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:18.131 [global] 00:11:18.131 thread=1 00:11:18.131 invalidate=1 00:11:18.131 rw=write 00:11:18.131 time_based=1 00:11:18.131 runtime=1 00:11:18.131 ioengine=libaio 00:11:18.131 direct=1 00:11:18.131 bs=4096 00:11:18.131 iodepth=1 00:11:18.131 norandommap=0 00:11:18.131 numjobs=1 00:11:18.131 00:11:18.131 verify_dump=1 00:11:18.131 verify_backlog=512 00:11:18.131 verify_state_save=0 00:11:18.131 do_verify=1 00:11:18.131 verify=crc32c-intel 00:11:18.131 [job0] 00:11:18.131 filename=/dev/nvme0n1 00:11:18.131 Could not set queue depth (nvme0n1) 00:11:18.131 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.131 fio-3.35 00:11:18.131 Starting 1 thread 00:11:19.509 00:11:19.509 job0: (groupid=0, jobs=1): err= 0: pid=1572060: Thu Nov 28 10:43:57 2024 00:11:19.509 read: IOPS=6447, BW=25.2MiB/s (26.4MB/s)(25.2MiB/1001msec) 00:11:19.509 slat (nsec): min=8236, max=31834, avg=8722.42, stdev=866.83 00:11:19.509 clat (usec): min=46, max=119, avg=64.91, stdev= 4.27 00:11:19.509 lat (usec): min=63, max=147, avg=73.63, stdev= 4.39 00:11:19.509 clat percentiles (usec): 00:11:19.509 | 1.00th=[ 58], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 62], 00:11:19.509 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 67], 00:11:19.509 | 70.00th=[ 68], 80.00th=[ 69], 90.00th=[ 71], 95.00th=[ 73], 00:11:19.509 | 99.00th=[ 77], 99.50th=[ 79], 99.90th=[ 85], 99.95th=[ 94], 00:11:19.509 | 99.99th=[ 120] 00:11:19.509 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:11:19.509 slat (nsec): min=8547, max=43065, avg=11319.45, stdev=1035.60 00:11:19.509 clat (usec): min=49, max=127, avg=62.33, stdev= 4.32 00:11:19.509 lat (usec): min=62, max=170, avg=73.65, stdev= 4.50 00:11:19.509 clat percentiles (usec): 00:11:19.509 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 59], 00:11:19.509 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:11:19.509 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 69], 95.00th=[ 70], 00:11:19.509 | 99.00th=[ 75], 99.50th=[ 77], 99.90th=[ 81], 99.95th=[ 86], 00:11:19.509 | 99.99th=[ 128] 00:11:19.509 bw ( KiB/s): min=27864, max=27864, per=100.00%, avg=27864.00, stdev= 0.00, samples=1 00:11:19.509 iops : min= 6966, max= 6966, avg=6966.00, stdev= 0.00, samples=1 00:11:19.509 lat (usec) : 50=0.02%, 100=99.97%, 250=0.02% 00:11:19.509 cpu : usr=11.10%, sys=16.50%, ctx=13110, majf=0, minf=1 00:11:19.509 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.510 issued rwts: total=6454,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.510 00:11:19.510 Run status group 0 (all jobs): 00:11:19.510 READ: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=25.2MiB (26.4MB), run=1001-1001msec 00:11:19.510 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:11:19.510 00:11:19.510 Disk stats (read/write): 00:11:19.510 nvme0n1: ios=5682/6144, merge=0/0, ticks=327/348, in_queue=675, util=90.68% 00:11:19.510 10:43:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:21.417 10:43:59 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:21.417 rmmod nvme_rdma 00:11:21.417 rmmod nvme_fabrics 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1570947 ']' 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1570947 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1570947 ']' 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1570947 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.417 10:43:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1570947 00:11:21.417 10:44:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.417 10:44:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.417 10:44:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1570947' 00:11:21.417 killing process with pid 1570947 00:11:21.417 10:44:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1570947 00:11:21.417 10:44:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1570947 00:11:23.326 10:44:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.326 10:44:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:23.326 00:11:23.326 real 0m19.586s 00:11:23.326 user 0m50.652s 00:11:23.326 sys 0m7.667s 00:11:23.326 10:44:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.326 10:44:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.326 ************************************ 00:11:23.326 END TEST nvmf_nmic 00:11:23.326 
************************************ 00:11:23.326 10:44:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:23.326 10:44:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.326 10:44:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.326 10:44:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.326 ************************************ 00:11:23.326 START TEST nvmf_fio_target 00:11:23.326 ************************************ 00:11:23.326 10:44:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:23.326 * Looking for test storage... 00:11:23.326 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:23.326 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.327 --rc genhtml_branch_coverage=1 00:11:23.327 --rc genhtml_function_coverage=1 00:11:23.327 --rc genhtml_legend=1 00:11:23.327 --rc geninfo_all_blocks=1 00:11:23.327 --rc geninfo_unexecuted_blocks=1 00:11:23.327 00:11:23.327 ' 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.327 --rc genhtml_branch_coverage=1 00:11:23.327 --rc genhtml_function_coverage=1 00:11:23.327 --rc genhtml_legend=1 00:11:23.327 --rc geninfo_all_blocks=1 00:11:23.327 --rc geninfo_unexecuted_blocks=1 00:11:23.327 00:11:23.327 ' 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:23.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.327 --rc genhtml_branch_coverage=1 00:11:23.327 --rc genhtml_function_coverage=1 00:11:23.327 --rc genhtml_legend=1 00:11:23.327 --rc geninfo_all_blocks=1 00:11:23.327 --rc geninfo_unexecuted_blocks=1 00:11:23.327 00:11:23.327 ' 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.327 --rc genhtml_branch_coverage=1 00:11:23.327 --rc genhtml_function_coverage=1 00:11:23.327 --rc genhtml_legend=1 00:11:23.327 --rc geninfo_all_blocks=1 00:11:23.327 --rc geninfo_unexecuted_blocks=1 00:11:23.327 00:11:23.327 ' 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.327 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:23.327 
10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:23.327 10:44:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
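Note: the trace above is nvmftestinit enumerating the supported NVMe-oF NICs by PCI vendor/device ID before bringing up the RDMA test network. A minimal manual sketch of the same discovery, assuming a Mellanox mlx5 setup like the one this run reports (interface names and addresses below are taken from this log and are illustrative, not part of the harness):

# Mellanox (vendor 0x15b3) PCI devices; 0x1015 is the device ID this run later reports
lspci -nn -d 15b3:

# RDMA-capable links known to the kernel (iproute2 rdma tool)
rdma link show

# Map each RDMA device to its netdev via sysfs
for ibdev in /sys/class/infiniband/*; do
    echo "$(basename "$ibdev") -> $(ls "$ibdev"/device/net)"
done

# Assign the 192.168.100.0/24 test addresses the harness uses on its two ports
ip addr add 192.168.100.8/24 dev mlx_0_0
ip addr add 192.168.100.9/24 dev mlx_0_1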
00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:31.489 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:31.489 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.489 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:31.490 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:31.490 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:31.490 10:44:10 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:31.490 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:31.490 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:31.490 altname enp217s0f0np0 00:11:31.490 altname ens818f0np0 00:11:31.490 inet 192.168.100.8/24 scope global mlx_0_0 00:11:31.490 valid_lft forever preferred_lft forever 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:31.490 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:31.490 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:31.490 altname enp217s0f1np1 00:11:31.490 altname ens818f1np1 00:11:31.490 inet 192.168.100.9/24 scope global mlx_0_1 00:11:31.490 valid_lft forever preferred_lft forever 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:31.490 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:31.490 10:44:10 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:31.751 192.168.100.9' 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:31.751 192.168.100.9' 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:31.751 192.168.100.9' 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:31.751 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1576925 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1576925 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1576925 ']' 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.752 10:44:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.752 [2024-11-28 10:44:10.573005] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:31.752 [2024-11-28 10:44:10.573104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.012 [2024-11-28 10:44:10.730664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.012 [2024-11-28 10:44:10.831289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:32.012 [2024-11-28 10:44:10.831337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.012 [2024-11-28 10:44:10.831349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.012 [2024-11-28 10:44:10.831362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.012 [2024-11-28 10:44:10.831371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.012 [2024-11-28 10:44:10.833820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.012 [2024-11-28 10:44:10.833839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.012 [2024-11-28 10:44:10.833930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.012 [2024-11-28 10:44:10.833937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.579 10:44:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.579 10:44:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:32.579 10:44:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.579 10:44:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:32.579 10:44:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.579 10:44:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.579 10:44:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:32.839 [2024-11-28 10:44:11.616073] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fb92e9a4940) succeed. 00:11:32.839 [2024-11-28 10:44:11.626143] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fb92e95d940) succeed. 
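Note: at this point the nvmf target application (nvmfpid=1576925) is running and fio.sh starts provisioning it over rpc.py. A condensed sketch of the same RPC sequence, with the full workspace path shortened to scripts/rpc.py and the host NQN/ID options omitted; transport options, bdev sizes, NQN, serial, address and port are the ones used in this run:

# Create the RDMA transport the listeners will use
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Back the namespaces with malloc bdevs (64 MiB, 512-byte blocks)
scripts/rpc.py bdev_malloc_create 64 512        # -> Malloc0
scripts/rpc.py bdev_malloc_create 64 512        # -> Malloc1

# Expose them through a subsystem and an RDMA listener on the first target IP
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# Host side: connect and verify the namespaces show up by serial
nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
lsblk -o NAME,SERIAL | grep SPDKISFASTANDAWESOME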
00:11:33.098 10:44:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:33.358 10:44:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:33.358 10:44:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:33.617 10:44:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:33.617 10:44:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:33.876 10:44:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:33.876 10:44:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:34.135 10:44:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:34.135 10:44:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:34.394 10:44:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:34.654 10:44:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:34.654 10:44:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:34.913 10:44:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:34.913 10:44:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:35.172 10:44:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:35.172 10:44:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:35.431 10:44:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.690 10:44:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:35.690 10:44:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.690 10:44:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:35.690 10:44:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.949 10:44:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:36.212 [2024-11-28 10:44:14.920259] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:36.212 10:44:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:36.469 10:44:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:36.470 10:44:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:37.848 10:44:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:37.848 10:44:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:37.848 10:44:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.848 10:44:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:37.848 10:44:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:37.848 10:44:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:39.756 10:44:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:39.756 10:44:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:39.756 10:44:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.756 10:44:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:39.756 10:44:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.756 10:44:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:39.756 10:44:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:39.756 [global] 00:11:39.756 thread=1 00:11:39.756 invalidate=1 00:11:39.756 rw=write 00:11:39.756 time_based=1 00:11:39.756 runtime=1 00:11:39.756 ioengine=libaio 00:11:39.756 direct=1 00:11:39.756 bs=4096 00:11:39.756 iodepth=1 00:11:39.756 norandommap=0 00:11:39.756 numjobs=1 00:11:39.756 00:11:39.756 verify_dump=1 00:11:39.756 verify_backlog=512 00:11:39.756 verify_state_save=0 00:11:39.756 do_verify=1 00:11:39.756 verify=crc32c-intel 00:11:39.756 [job0] 00:11:39.756 filename=/dev/nvme0n1 00:11:39.756 [job1] 00:11:39.756 filename=/dev/nvme0n2 00:11:39.756 [job2] 00:11:39.756 filename=/dev/nvme0n3 00:11:39.756 [job3] 00:11:39.756 filename=/dev/nvme0n4 00:11:39.756 Could not set queue depth (nvme0n1) 00:11:39.756 Could not set queue depth (nvme0n2) 00:11:39.756 Could not set queue depth (nvme0n3) 00:11:39.756 Could not set queue depth (nvme0n4) 00:11:40.015 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.016 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.016 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.016 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.016 fio-3.35 00:11:40.016 Starting 4 threads 00:11:41.412 00:11:41.412 job0: (groupid=0, jobs=1): err= 0: pid=1578683: Thu Nov 28 10:44:19 2024 00:11:41.412 read: IOPS=3516, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1001msec) 00:11:41.412 slat (nsec): min=8320, max=21526, avg=9024.92, stdev=846.21 00:11:41.412 clat (usec): min=75, max=233, avg=130.32, stdev=38.16 00:11:41.412 lat (usec): min=83, max=242, avg=139.35, stdev=38.31 00:11:41.412 clat percentiles (usec): 00:11:41.412 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 89], 00:11:41.412 | 30.00th=[ 93], 40.00th=[ 99], 50.00th=[ 141], 60.00th=[ 155], 00:11:41.412 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 188], 00:11:41.412 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 223], 99.95th=[ 227], 00:11:41.412 | 99.99th=[ 233] 00:11:41.412 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:41.412 slat (nsec): min=10558, max=66266, avg=11621.33, stdev=1521.11 00:11:41.412 clat (usec): min=67, max=280, avg=125.47, stdev=36.69 00:11:41.412 lat (usec): min=79, max=291, avg=137.09, stdev=36.77 00:11:41.412 clat percentiles (usec): 00:11:41.412 | 1.00th=[ 76], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 85], 00:11:41.412 | 30.00th=[ 89], 40.00th=[ 101], 50.00th=[ 133], 60.00th=[ 147], 00:11:41.412 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 172], 95.00th=[ 182], 00:11:41.412 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 223], 99.95th=[ 227], 00:11:41.412 | 99.99th=[ 281] 00:11:41.412 bw ( KiB/s): min=17272, max=17272, per=33.77%, avg=17272.00, stdev= 0.00, samples=1 00:11:41.412 iops : min= 4318, max= 4318, avg=4318.00, stdev= 0.00, samples=1 00:11:41.412 lat (usec) : 100=40.16%, 250=59.83%, 500=0.01% 00:11:41.412 cpu : usr=5.50%, sys=9.80%, ctx=7105, majf=0, minf=1 00:11:41.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:41.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.412 issued rwts: total=3520,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:41.412 job1: (groupid=0, jobs=1): err= 0: pid=1578702: Thu Nov 28 10:44:19 2024 00:11:41.412 read: IOPS=2914, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec) 00:11:41.412 slat (nsec): min=8156, max=23164, avg=9064.39, stdev=834.61 00:11:41.412 clat (usec): min=79, max=249, avg=160.93, stdev=21.98 00:11:41.412 lat (usec): min=88, max=262, avg=169.99, stdev=22.02 00:11:41.412 clat percentiles (usec): 00:11:41.412 | 1.00th=[ 101], 5.00th=[ 119], 10.00th=[ 130], 20.00th=[ 141], 00:11:41.412 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:11:41.412 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 192], 00:11:41.412 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 239], 99.95th=[ 245], 00:11:41.412 | 99.99th=[ 249] 00:11:41.412 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:41.412 slat (nsec): min=10354, max=38233, avg=11382.86, stdev=1245.43 00:11:41.412 clat (usec): 
min=75, max=232, avg=148.24, stdev=23.28 00:11:41.412 lat (usec): min=87, max=244, avg=159.62, stdev=23.31 00:11:41.412 clat percentiles (usec): 00:11:41.412 | 1.00th=[ 94], 5.00th=[ 108], 10.00th=[ 118], 20.00th=[ 126], 00:11:41.412 | 30.00th=[ 133], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:11:41.412 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 182], 00:11:41.412 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 223], 99.95th=[ 225], 00:11:41.412 | 99.99th=[ 233] 00:11:41.412 bw ( KiB/s): min=12432, max=12432, per=24.31%, avg=12432.00, stdev= 0.00, samples=1 00:11:41.412 iops : min= 3108, max= 3108, avg=3108.00, stdev= 0.00, samples=1 00:11:41.412 lat (usec) : 100=1.55%, 250=98.45% 00:11:41.412 cpu : usr=3.80%, sys=9.00%, ctx=5989, majf=0, minf=1 00:11:41.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:41.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.413 issued rwts: total=2917,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:41.413 job2: (groupid=0, jobs=1): err= 0: pid=1578721: Thu Nov 28 10:44:19 2024 00:11:41.413 read: IOPS=2766, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:11:41.413 slat (nsec): min=8601, max=43867, avg=9814.15, stdev=2098.08 00:11:41.413 clat (usec): min=90, max=248, avg=162.15, stdev=22.21 00:11:41.413 lat (usec): min=103, max=258, avg=171.96, stdev=22.55 00:11:41.413 clat percentiles (usec): 00:11:41.413 | 1.00th=[ 101], 5.00th=[ 115], 10.00th=[ 139], 20.00th=[ 149], 00:11:41.413 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:11:41.413 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 200], 00:11:41.413 | 99.00th=[ 223], 99.50th=[ 225], 99.90th=[ 233], 99.95th=[ 239], 00:11:41.413 | 99.99th=[ 249] 00:11:41.413 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:41.413 slat (nsec): min=10750, max=39481, avg=12164.44, stdev=2253.40 00:11:41.413 clat (usec): min=75, max=241, avg=153.60, stdev=24.37 00:11:41.413 lat (usec): min=90, max=254, avg=165.77, stdev=24.61 00:11:41.413 clat percentiles (usec): 00:11:41.413 | 1.00th=[ 94], 5.00th=[ 102], 10.00th=[ 127], 20.00th=[ 137], 00:11:41.413 | 30.00th=[ 145], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:11:41.413 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 194], 00:11:41.413 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 233], 99.95th=[ 233], 00:11:41.413 | 99.99th=[ 241] 00:11:41.413 bw ( KiB/s): min=12288, max=12288, per=24.02%, avg=12288.00, stdev= 0.00, samples=1 00:11:41.413 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:41.413 lat (usec) : 100=2.62%, 250=97.38% 00:11:41.413 cpu : usr=4.90%, sys=7.80%, ctx=5841, majf=0, minf=1 00:11:41.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:41.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.413 issued rwts: total=2769,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:41.413 job3: (groupid=0, jobs=1): err= 0: pid=1578725: Thu Nov 28 10:44:19 2024 00:11:41.413 read: IOPS=2919, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec) 00:11:41.413 slat (nsec): min=8552, max=31878, avg=9242.63, stdev=895.73 00:11:41.413 clat 
(usec): min=86, max=236, avg=160.60, stdev=23.78 00:11:41.413 lat (usec): min=95, max=245, avg=169.84, stdev=23.78 00:11:41.413 clat percentiles (usec): 00:11:41.413 | 1.00th=[ 100], 5.00th=[ 114], 10.00th=[ 128], 20.00th=[ 139], 00:11:41.413 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:11:41.413 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:11:41.413 | 99.00th=[ 217], 99.50th=[ 223], 99.90th=[ 233], 99.95th=[ 237], 00:11:41.413 | 99.99th=[ 237] 00:11:41.413 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:41.413 slat (nsec): min=10654, max=44536, avg=11590.07, stdev=1199.85 00:11:41.413 clat (usec): min=82, max=227, avg=147.93, stdev=25.28 00:11:41.413 lat (usec): min=94, max=238, avg=159.52, stdev=25.22 00:11:41.413 clat percentiles (usec): 00:11:41.413 | 1.00th=[ 94], 5.00th=[ 102], 10.00th=[ 116], 20.00th=[ 123], 00:11:41.413 | 30.00th=[ 131], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 157], 00:11:41.413 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 178], 95.00th=[ 188], 00:11:41.413 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 221], 99.95th=[ 223], 00:11:41.413 | 99.99th=[ 227] 00:11:41.413 bw ( KiB/s): min=12472, max=12472, per=24.38%, avg=12472.00, stdev= 0.00, samples=1 00:11:41.413 iops : min= 3118, max= 3118, avg=3118.00, stdev= 0.00, samples=1 00:11:41.413 lat (usec) : 100=2.55%, 250=97.45% 00:11:41.413 cpu : usr=4.10%, sys=8.80%, ctx=5994, majf=0, minf=2 00:11:41.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:41.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.413 issued rwts: total=2922,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:41.413 00:11:41.413 Run status group 0 (all jobs): 00:11:41.413 READ: bw=47.3MiB/s (49.6MB/s), 10.8MiB/s-13.7MiB/s (11.3MB/s-14.4MB/s), io=47.4MiB (49.7MB), run=1001-1001msec 00:11:41.413 WRITE: bw=49.9MiB/s (52.4MB/s), 12.0MiB/s-14.0MiB/s (12.6MB/s-14.7MB/s), io=50.0MiB (52.4MB), run=1001-1001msec 00:11:41.413 00:11:41.413 Disk stats (read/write): 00:11:41.413 nvme0n1: ios=3106/3072, merge=0/0, ticks=378/341, in_queue=719, util=84.37% 00:11:41.413 nvme0n2: ios=2453/2560, merge=0/0, ticks=363/348, in_queue=711, util=85.26% 00:11:41.413 nvme0n3: ios=2306/2560, merge=0/0, ticks=341/371, in_queue=712, util=88.43% 00:11:41.413 nvme0n4: ios=2459/2560, merge=0/0, ticks=366/338, in_queue=704, util=89.47% 00:11:41.413 10:44:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:41.413 [global] 00:11:41.413 thread=1 00:11:41.413 invalidate=1 00:11:41.413 rw=randwrite 00:11:41.413 time_based=1 00:11:41.413 runtime=1 00:11:41.413 ioengine=libaio 00:11:41.413 direct=1 00:11:41.413 bs=4096 00:11:41.413 iodepth=1 00:11:41.413 norandommap=0 00:11:41.413 numjobs=1 00:11:41.413 00:11:41.413 verify_dump=1 00:11:41.413 verify_backlog=512 00:11:41.413 verify_state_save=0 00:11:41.413 do_verify=1 00:11:41.413 verify=crc32c-intel 00:11:41.413 [job0] 00:11:41.413 filename=/dev/nvme0n1 00:11:41.413 [job1] 00:11:41.413 filename=/dev/nvme0n2 00:11:41.413 [job2] 00:11:41.413 filename=/dev/nvme0n3 00:11:41.413 [job3] 00:11:41.413 filename=/dev/nvme0n4 00:11:41.413 Could not set queue depth (nvme0n1) 00:11:41.413 Could not set queue depth (nvme0n2) 
00:11:41.413 Could not set queue depth (nvme0n3) 00:11:41.413 Could not set queue depth (nvme0n4) 00:11:41.680 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.680 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.680 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.680 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.680 fio-3.35 00:11:41.680 Starting 4 threads 00:11:43.092 00:11:43.092 job0: (groupid=0, jobs=1): err= 0: pid=1579115: Thu Nov 28 10:44:21 2024 00:11:43.092 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:11:43.092 slat (nsec): min=8269, max=22971, avg=8738.06, stdev=702.83 00:11:43.092 clat (usec): min=70, max=117, avg=84.85, stdev= 5.57 00:11:43.092 lat (usec): min=79, max=126, avg=93.58, stdev= 5.62 00:11:43.092 clat percentiles (usec): 00:11:43.092 | 1.00th=[ 76], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 81], 00:11:43.092 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 86], 00:11:43.092 | 70.00th=[ 87], 80.00th=[ 89], 90.00th=[ 92], 95.00th=[ 95], 00:11:43.092 | 99.00th=[ 103], 99.50th=[ 105], 99.90th=[ 113], 99.95th=[ 117], 00:11:43.092 | 99.99th=[ 118] 00:11:43.092 write: IOPS=5369, BW=21.0MiB/s (22.0MB/s)(21.0MiB/1001msec); 0 zone resets 00:11:43.092 slat (nsec): min=10053, max=71021, avg=11127.60, stdev=1309.97 00:11:43.092 clat (usec): min=66, max=139, avg=80.66, stdev= 5.60 00:11:43.092 lat (usec): min=77, max=175, avg=91.79, stdev= 5.81 00:11:43.092 clat percentiles (usec): 00:11:43.092 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 77], 00:11:43.092 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 82], 00:11:43.092 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 91], 00:11:43.092 | 99.00th=[ 98], 99.50th=[ 101], 99.90th=[ 106], 99.95th=[ 110], 00:11:43.092 | 99.99th=[ 141] 00:11:43.092 bw ( KiB/s): min=21568, max=21568, per=31.03%, avg=21568.00, stdev= 0.00, samples=1 00:11:43.092 iops : min= 5392, max= 5392, avg=5392.00, stdev= 0.00, samples=1 00:11:43.092 lat (usec) : 100=98.79%, 250=1.21% 00:11:43.092 cpu : usr=9.50%, sys=12.60%, ctx=10496, majf=0, minf=1 00:11:43.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.092 issued rwts: total=5120,5375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.092 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.092 job1: (groupid=0, jobs=1): err= 0: pid=1579129: Thu Nov 28 10:44:21 2024 00:11:43.092 read: IOPS=3412, BW=13.3MiB/s (14.0MB/s)(13.3MiB/1001msec) 00:11:43.092 slat (nsec): min=8337, max=18645, avg=9046.76, stdev=787.89 00:11:43.092 clat (usec): min=74, max=389, avg=135.49, stdev=13.18 00:11:43.092 lat (usec): min=83, max=398, avg=144.53, stdev=13.19 00:11:43.092 clat percentiles (usec): 00:11:43.092 | 1.00th=[ 97], 5.00th=[ 118], 10.00th=[ 123], 20.00th=[ 129], 00:11:43.092 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 137], 00:11:43.093 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 153], 00:11:43.093 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 204], 99.95th=[ 212], 00:11:43.093 | 99.99th=[ 392] 00:11:43.093 write: IOPS=3580, BW=14.0MiB/s 
(14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:43.093 slat (nsec): min=10104, max=43318, avg=11012.18, stdev=1152.47 00:11:43.093 clat (usec): min=70, max=200, avg=125.70, stdev=13.47 00:11:43.093 lat (usec): min=81, max=214, avg=136.72, stdev=13.48 00:11:43.093 clat percentiles (usec): 00:11:43.093 | 1.00th=[ 86], 5.00th=[ 108], 10.00th=[ 113], 20.00th=[ 119], 00:11:43.093 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:11:43.093 | 70.00th=[ 131], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:11:43.093 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 198], 00:11:43.093 | 99.99th=[ 200] 00:11:43.093 bw ( KiB/s): min=15792, max=15792, per=22.72%, avg=15792.00, stdev= 0.00, samples=1 00:11:43.093 iops : min= 3948, max= 3948, avg=3948.00, stdev= 0.00, samples=1 00:11:43.093 lat (usec) : 100=2.11%, 250=97.87%, 500=0.01% 00:11:43.093 cpu : usr=4.80%, sys=9.90%, ctx=7000, majf=0, minf=2 00:11:43.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.093 issued rwts: total=3416,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.093 job2: (groupid=0, jobs=1): err= 0: pid=1579143: Thu Nov 28 10:44:21 2024 00:11:43.093 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:11:43.093 slat (nsec): min=8544, max=24600, avg=9052.17, stdev=804.18 00:11:43.093 clat (usec): min=80, max=138, avg=95.47, stdev= 6.26 00:11:43.093 lat (usec): min=89, max=147, avg=104.52, stdev= 6.32 00:11:43.093 clat percentiles (usec): 00:11:43.093 | 1.00th=[ 85], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 91], 00:11:43.093 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 95], 60.00th=[ 96], 00:11:43.093 | 70.00th=[ 98], 80.00th=[ 100], 90.00th=[ 104], 95.00th=[ 108], 00:11:43.093 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 125], 99.95th=[ 128], 00:11:43.093 | 99.99th=[ 139] 00:11:43.093 write: IOPS=4844, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1001msec); 0 zone resets 00:11:43.093 slat (nsec): min=10330, max=38956, avg=11316.88, stdev=968.32 00:11:43.093 clat (usec): min=75, max=126, avg=90.57, stdev= 6.18 00:11:43.093 lat (usec): min=86, max=151, avg=101.89, stdev= 6.29 00:11:43.093 clat percentiles (usec): 00:11:43.093 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 84], 20.00th=[ 86], 00:11:43.093 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 91], 00:11:43.093 | 70.00th=[ 93], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 102], 00:11:43.093 | 99.00th=[ 110], 99.50th=[ 114], 99.90th=[ 122], 99.95th=[ 124], 00:11:43.093 | 99.99th=[ 127] 00:11:43.093 bw ( KiB/s): min=20480, max=20480, per=29.47%, avg=20480.00, stdev= 0.00, samples=1 00:11:43.093 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:43.093 lat (usec) : 100=85.88%, 250=14.12% 00:11:43.093 cpu : usr=8.30%, sys=11.90%, ctx=9457, majf=0, minf=2 00:11:43.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.093 issued rwts: total=4608,4849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.093 job3: (groupid=0, jobs=1): err= 0: pid=1579144: Thu Nov 28 10:44:21 2024 00:11:43.093 read: IOPS=3410, 
BW=13.3MiB/s (14.0MB/s)(13.3MiB/1001msec) 00:11:43.093 slat (nsec): min=8649, max=21938, avg=9260.38, stdev=713.30 00:11:43.093 clat (usec): min=89, max=390, avg=135.20, stdev=11.64 00:11:43.093 lat (usec): min=98, max=399, avg=144.46, stdev=11.61 00:11:43.093 clat percentiles (usec): 00:11:43.093 | 1.00th=[ 105], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 128], 00:11:43.093 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:11:43.093 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 151], 00:11:43.093 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 208], 00:11:43.093 | 99.99th=[ 392] 00:11:43.093 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:43.093 slat (nsec): min=10320, max=40852, avg=11251.63, stdev=1106.77 00:11:43.093 clat (usec): min=69, max=191, avg=125.61, stdev=11.27 00:11:43.093 lat (usec): min=86, max=202, avg=136.86, stdev=11.30 00:11:43.093 clat percentiles (usec): 00:11:43.093 | 1.00th=[ 94], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 119], 00:11:43.093 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:11:43.093 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 143], 00:11:43.093 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 188], 00:11:43.093 | 99.99th=[ 192] 00:11:43.093 bw ( KiB/s): min=15784, max=15784, per=22.71%, avg=15784.00, stdev= 0.00, samples=1 00:11:43.093 iops : min= 3946, max= 3946, avg=3946.00, stdev= 0.00, samples=1 00:11:43.093 lat (usec) : 100=1.20%, 250=98.79%, 500=0.01% 00:11:43.093 cpu : usr=5.00%, sys=9.90%, ctx=6998, majf=0, minf=1 00:11:43.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.093 issued rwts: total=3414,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.093 00:11:43.093 Run status group 0 (all jobs): 00:11:43.093 READ: bw=64.6MiB/s (67.8MB/s), 13.3MiB/s-20.0MiB/s (14.0MB/s-20.9MB/s), io=64.7MiB (67.8MB), run=1001-1001msec 00:11:43.093 WRITE: bw=67.9MiB/s (71.2MB/s), 14.0MiB/s-21.0MiB/s (14.7MB/s-22.0MB/s), io=67.9MiB (71.2MB), run=1001-1001msec 00:11:43.093 00:11:43.093 Disk stats (read/write): 00:11:43.093 nvme0n1: ios=4205/4608, merge=0/0, ticks=349/331, in_queue=680, util=84.65% 00:11:43.093 nvme0n2: ios=2774/3072, merge=0/0, ticks=358/357, in_queue=715, util=85.29% 00:11:43.093 nvme0n3: ios=3800/4096, merge=0/0, ticks=333/324, in_queue=657, util=88.36% 00:11:43.093 nvme0n4: ios=2773/3072, merge=0/0, ticks=339/356, in_queue=695, util=89.50% 00:11:43.093 10:44:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:43.093 [global] 00:11:43.093 thread=1 00:11:43.093 invalidate=1 00:11:43.093 rw=write 00:11:43.093 time_based=1 00:11:43.093 runtime=1 00:11:43.093 ioengine=libaio 00:11:43.093 direct=1 00:11:43.093 bs=4096 00:11:43.093 iodepth=128 00:11:43.093 norandommap=0 00:11:43.093 numjobs=1 00:11:43.093 00:11:43.093 verify_dump=1 00:11:43.093 verify_backlog=512 00:11:43.093 verify_state_save=0 00:11:43.093 do_verify=1 00:11:43.093 verify=crc32c-intel 00:11:43.093 [job0] 00:11:43.093 filename=/dev/nvme0n1 00:11:43.093 [job1] 00:11:43.093 filename=/dev/nvme0n2 00:11:43.093 [job2] 00:11:43.093 filename=/dev/nvme0n3 00:11:43.093 
[job3] 00:11:43.093 filename=/dev/nvme0n4 00:11:43.093 Could not set queue depth (nvme0n1) 00:11:43.093 Could not set queue depth (nvme0n2) 00:11:43.093 Could not set queue depth (nvme0n3) 00:11:43.093 Could not set queue depth (nvme0n4) 00:11:43.352 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:43.352 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:43.352 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:43.352 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:43.352 fio-3.35 00:11:43.352 Starting 4 threads 00:11:44.749 00:11:44.750 job0: (groupid=0, jobs=1): err= 0: pid=1579562: Thu Nov 28 10:44:23 2024 00:11:44.750 read: IOPS=3664, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1004msec) 00:11:44.750 slat (nsec): min=1979, max=5013.4k, avg=125099.14, stdev=556611.53 00:11:44.750 clat (usec): min=3176, max=21107, avg=16634.20, stdev=4664.63 00:11:44.750 lat (usec): min=3383, max=21112, avg=16759.30, stdev=4674.61 00:11:44.750 clat percentiles (usec): 00:11:44.750 | 1.00th=[ 3949], 5.00th=[ 5211], 10.00th=[ 8225], 20.00th=[13304], 00:11:44.750 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18482], 60.00th=[18744], 00:11:44.750 | 70.00th=[19268], 80.00th=[19792], 90.00th=[20055], 95.00th=[20317], 00:11:44.750 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:11:44.750 | 99.99th=[21103] 00:11:44.750 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:11:44.750 slat (usec): min=2, max=4002, avg=118.81, stdev=526.40 00:11:44.750 clat (usec): min=2151, max=23171, avg=16167.18, stdev=5128.36 00:11:44.750 lat (usec): min=2160, max=23174, avg=16285.99, stdev=5148.14 00:11:44.750 clat percentiles (usec): 00:11:44.750 | 1.00th=[ 3425], 5.00th=[ 4883], 10.00th=[ 7373], 20.00th=[ 9634], 00:11:44.750 | 30.00th=[17433], 40.00th=[18220], 50.00th=[18744], 60.00th=[19006], 00:11:44.750 | 70.00th=[19268], 80.00th=[19792], 90.00th=[20055], 95.00th=[20317], 00:11:44.750 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20841], 99.95th=[23200], 00:11:44.750 | 99.99th=[23200] 00:11:44.750 bw ( KiB/s): min=14040, max=18472, per=20.07%, avg=16256.00, stdev=3133.90, samples=2 00:11:44.750 iops : min= 3510, max= 4618, avg=4064.00, stdev=783.47, samples=2 00:11:44.750 lat (msec) : 4=1.59%, 10=17.50%, 20=68.36%, 50=12.54% 00:11:44.750 cpu : usr=2.69%, sys=5.18%, ctx=2333, majf=0, minf=1 00:11:44.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:44.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:44.750 issued rwts: total=3679,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:44.750 job1: (groupid=0, jobs=1): err= 0: pid=1579574: Thu Nov 28 10:44:23 2024 00:11:44.750 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:11:44.750 slat (nsec): min=1974, max=8339.9k, avg=90361.66, stdev=376950.66 00:11:44.750 clat (usec): min=4905, max=24622, avg=11754.94, stdev=6310.43 00:11:44.750 lat (usec): min=5259, max=26086, avg=11845.30, stdev=6353.43 00:11:44.750 clat percentiles (usec): 00:11:44.750 | 1.00th=[ 5276], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 5997], 00:11:44.750 | 30.00th=[ 6128], 40.00th=[ 6194], 50.00th=[ 
6259], 60.00th=[16909], 00:11:44.750 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19268], 95.00th=[19530], 00:11:44.750 | 99.00th=[20055], 99.50th=[21103], 99.90th=[22676], 99.95th=[23200], 00:11:44.750 | 99.99th=[24511] 00:11:44.750 write: IOPS=5715, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1002msec); 0 zone resets 00:11:44.750 slat (usec): min=2, max=2812, avg=80.69, stdev=311.73 00:11:44.750 clat (usec): min=1122, max=19212, avg=10591.53, stdev=5945.10 00:11:44.750 lat (usec): min=1125, max=19217, avg=10672.23, stdev=5982.01 00:11:44.750 clat percentiles (usec): 00:11:44.750 | 1.00th=[ 3064], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5604], 00:11:44.750 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[10552], 00:11:44.750 | 70.00th=[17695], 80.00th=[18220], 90.00th=[18482], 95.00th=[18744], 00:11:44.750 | 99.00th=[19006], 99.50th=[19006], 99.90th=[19268], 99.95th=[19268], 00:11:44.750 | 99.99th=[19268] 00:11:44.750 bw ( KiB/s): min=16384, max=16384, per=20.23%, avg=16384.00, stdev= 0.00, samples=1 00:11:44.750 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:44.750 lat (msec) : 2=0.16%, 4=0.47%, 10=56.25%, 20=42.69%, 50=0.43% 00:11:44.750 cpu : usr=3.30%, sys=6.09%, ctx=1671, majf=0, minf=1 00:11:44.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:44.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:44.750 issued rwts: total=5632,5727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:44.750 job2: (groupid=0, jobs=1): err= 0: pid=1579575: Thu Nov 28 10:44:23 2024 00:11:44.750 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:11:44.750 slat (nsec): min=1983, max=8915.7k, avg=73356.96, stdev=286775.95 00:11:44.750 clat (usec): min=5832, max=29122, avg=9711.64, stdev=5030.33 00:11:44.750 lat (usec): min=6374, max=29133, avg=9785.00, stdev=5065.38 00:11:44.750 clat percentiles (usec): 00:11:44.750 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7111], 00:11:44.750 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7373], 60.00th=[ 7439], 00:11:44.750 | 70.00th=[ 7570], 80.00th=[ 8160], 90.00th=[20055], 95.00th=[20317], 00:11:44.750 | 99.00th=[21627], 99.50th=[22414], 99.90th=[23200], 99.95th=[23200], 00:11:44.750 | 99.99th=[29230] 00:11:44.750 write: IOPS=6897, BW=26.9MiB/s (28.3MB/s)(27.1MiB/1004msec); 0 zone resets 00:11:44.750 slat (usec): min=2, max=4198, avg=68.43, stdev=245.36 00:11:44.750 clat (usec): min=3108, max=23415, avg=9019.06, stdev=4640.81 00:11:44.750 lat (usec): min=4086, max=23418, avg=9087.49, stdev=4671.69 00:11:44.750 clat percentiles (usec): 00:11:44.750 | 1.00th=[ 5932], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6718], 00:11:44.750 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 6980], 00:11:44.750 | 70.00th=[ 7046], 80.00th=[ 7504], 90.00th=[19268], 95.00th=[20055], 00:11:44.750 | 99.00th=[20579], 99.50th=[21890], 99.90th=[23462], 99.95th=[23462], 00:11:44.750 | 99.99th=[23462] 00:11:44.750 bw ( KiB/s): min=21616, max=32768, per=33.57%, avg=27192.00, stdev=7885.65, samples=2 00:11:44.750 iops : min= 5404, max= 8192, avg=6798.00, stdev=1971.41, samples=2 00:11:44.750 lat (msec) : 4=0.01%, 10=80.78%, 20=11.28%, 50=7.93% 00:11:44.750 cpu : usr=5.88%, sys=6.98%, ctx=1451, majf=0, minf=1 00:11:44.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:44.750 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:44.750 issued rwts: total=6656,6925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:44.750 job3: (groupid=0, jobs=1): err= 0: pid=1579576: Thu Nov 28 10:44:23 2024 00:11:44.750 read: IOPS=3493, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1004msec) 00:11:44.750 slat (usec): min=2, max=3384, avg=143.94, stdev=393.10 00:11:44.750 clat (usec): min=3173, max=23154, avg=18526.03, stdev=2142.82 00:11:44.750 lat (usec): min=5478, max=23167, avg=18669.97, stdev=2118.16 00:11:44.750 clat percentiles (usec): 00:11:44.750 | 1.00th=[10552], 5.00th=[13829], 10.00th=[16450], 20.00th=[18220], 00:11:44.750 | 30.00th=[18744], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:11:44.750 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20317], 95.00th=[20317], 00:11:44.750 | 99.00th=[20579], 99.50th=[21365], 99.90th=[22152], 99.95th=[23200], 00:11:44.750 | 99.99th=[23200] 00:11:44.750 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:11:44.750 slat (usec): min=2, max=3162, avg=132.61, stdev=360.82 00:11:44.750 clat (usec): min=6157, max=21045, avg=17338.58, stdev=3516.82 00:11:44.750 lat (usec): min=6160, max=21049, avg=17471.19, stdev=3528.90 00:11:44.750 clat percentiles (usec): 00:11:44.750 | 1.00th=[ 7046], 5.00th=[ 8225], 10.00th=[ 9241], 20.00th=[17433], 00:11:44.750 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18220], 60.00th=[18482], 00:11:44.750 | 70.00th=[18744], 80.00th=[19792], 90.00th=[20055], 95.00th=[20317], 00:11:44.750 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20841], 99.95th=[21103], 00:11:44.750 | 99.99th=[21103] 00:11:44.750 bw ( KiB/s): min=12288, max=16384, per=17.70%, avg=14336.00, stdev=2896.31, samples=2 00:11:44.751 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:44.751 lat (msec) : 4=0.01%, 10=6.02%, 20=80.36%, 50=13.61% 00:11:44.751 cpu : usr=2.39%, sys=5.28%, ctx=2136, majf=0, minf=1 00:11:44.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:44.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:44.751 issued rwts: total=3507,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:44.751 00:11:44.751 Run status group 0 (all jobs): 00:11:44.751 READ: bw=75.8MiB/s (79.4MB/s), 13.6MiB/s-25.9MiB/s (14.3MB/s-27.2MB/s), io=76.1MiB (79.8MB), run=1002-1004msec 00:11:44.751 WRITE: bw=79.1MiB/s (82.9MB/s), 13.9MiB/s-26.9MiB/s (14.6MB/s-28.3MB/s), io=79.4MiB (83.3MB), run=1002-1004msec 00:11:44.751 00:11:44.751 Disk stats (read/write): 00:11:44.751 nvme0n1: ios=3125/3584, merge=0/0, ticks=16191/19106, in_queue=35297, util=84.35% 00:11:44.751 nvme0n2: ios=3584/4091, merge=0/0, ticks=14677/13987, in_queue=28664, util=85.07% 00:11:44.751 nvme0n3: ios=6144/6339, merge=0/0, ticks=16722/16177, in_queue=32899, util=88.23% 00:11:44.751 nvme0n4: ios=2919/3072, merge=0/0, ticks=13685/13059, in_queue=26744, util=89.47% 00:11:44.751 10:44:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:44.751 [global] 00:11:44.751 thread=1 00:11:44.751 invalidate=1 00:11:44.751 rw=randwrite 00:11:44.751 time_based=1 
00:11:44.751 runtime=1 00:11:44.751 ioengine=libaio 00:11:44.751 direct=1 00:11:44.751 bs=4096 00:11:44.751 iodepth=128 00:11:44.751 norandommap=0 00:11:44.751 numjobs=1 00:11:44.751 00:11:44.751 verify_dump=1 00:11:44.751 verify_backlog=512 00:11:44.751 verify_state_save=0 00:11:44.751 do_verify=1 00:11:44.751 verify=crc32c-intel 00:11:44.751 [job0] 00:11:44.751 filename=/dev/nvme0n1 00:11:44.751 [job1] 00:11:44.751 filename=/dev/nvme0n2 00:11:44.751 [job2] 00:11:44.751 filename=/dev/nvme0n3 00:11:44.751 [job3] 00:11:44.751 filename=/dev/nvme0n4 00:11:44.751 Could not set queue depth (nvme0n1) 00:11:44.751 Could not set queue depth (nvme0n2) 00:11:44.751 Could not set queue depth (nvme0n3) 00:11:44.751 Could not set queue depth (nvme0n4) 00:11:45.011 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.011 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.011 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.011 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.011 fio-3.35 00:11:45.011 Starting 4 threads 00:11:46.402 00:11:46.402 job0: (groupid=0, jobs=1): err= 0: pid=1579998: Thu Nov 28 10:44:24 2024 00:11:46.402 read: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec) 00:11:46.402 slat (usec): min=2, max=1662, avg=59.37, stdev=211.90 00:11:46.402 clat (usec): min=5833, max=9298, avg=7730.68, stdev=559.58 00:11:46.402 lat (usec): min=5840, max=9494, avg=7790.05, stdev=577.40 00:11:46.402 clat percentiles (usec): 00:11:46.402 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7308], 00:11:46.402 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7832], 00:11:46.402 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8717], 00:11:46.402 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[ 9241], 99.95th=[ 9241], 00:11:46.402 | 99.99th=[ 9241] 00:11:46.402 write: IOPS=8616, BW=33.7MiB/s (35.3MB/s)(33.7MiB/1001msec); 0 zone resets 00:11:46.402 slat (usec): min=2, max=1100, avg=55.89, stdev=197.61 00:11:46.402 clat (usec): min=602, max=9217, avg=7358.18, stdev=715.94 00:11:46.402 lat (usec): min=1487, max=9226, avg=7414.07, stdev=729.19 00:11:46.402 clat percentiles (usec): 00:11:46.402 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 6915], 00:11:46.402 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7570], 00:11:46.402 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8160], 95.00th=[ 8455], 00:11:46.402 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[ 9110], 99.95th=[ 9241], 00:11:46.402 | 99.99th=[ 9241] 00:11:46.402 bw ( KiB/s): min=35544, max=35544, per=30.06%, avg=35544.00, stdev= 0.00, samples=1 00:11:46.402 iops : min= 8886, max= 8886, avg=8886.00, stdev= 0.00, samples=1 00:11:46.402 lat (usec) : 750=0.01% 00:11:46.402 lat (msec) : 2=0.09%, 4=0.19%, 10=99.71% 00:11:46.402 cpu : usr=3.30%, sys=7.60%, ctx=1140, majf=0, minf=1 00:11:46.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:46.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.402 issued rwts: total=8192,8625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.402 job1: (groupid=0, jobs=1): err= 
0: pid=1579999: Thu Nov 28 10:44:24 2024 00:11:46.402 read: IOPS=8213, BW=32.1MiB/s (33.6MB/s)(32.1MiB/1001msec) 00:11:46.402 slat (usec): min=2, max=1678, avg=59.23, stdev=210.40 00:11:46.402 clat (usec): min=494, max=9516, avg=7665.68, stdev=664.37 00:11:46.402 lat (usec): min=1485, max=9525, avg=7724.91, stdev=674.60 00:11:46.402 clat percentiles (usec): 00:11:46.402 | 1.00th=[ 6390], 5.00th=[ 6652], 10.00th=[ 6915], 20.00th=[ 7242], 00:11:46.402 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:11:46.402 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:11:46.402 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[ 9372], 99.95th=[ 9372], 00:11:46.402 | 99.99th=[ 9503] 00:11:46.402 write: IOPS=8695, BW=34.0MiB/s (35.6MB/s)(34.0MiB/1001msec); 0 zone resets 00:11:46.402 slat (usec): min=2, max=1128, avg=55.27, stdev=195.04 00:11:46.402 clat (usec): min=2553, max=9055, avg=7319.85, stdev=626.83 00:11:46.402 lat (usec): min=3330, max=9151, avg=7375.12, stdev=637.82 00:11:46.402 clat percentiles (usec): 00:11:46.402 | 1.00th=[ 5997], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 6849], 00:11:46.402 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:11:46.402 | 70.00th=[ 7570], 80.00th=[ 7832], 90.00th=[ 8160], 95.00th=[ 8356], 00:11:46.402 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 8848], 99.95th=[ 8979], 00:11:46.402 | 99.99th=[ 9110] 00:11:46.402 bw ( KiB/s): min=36352, max=36352, per=30.75%, avg=36352.00, stdev= 0.00, samples=1 00:11:46.402 iops : min= 9088, max= 9088, avg=9088.00, stdev= 0.00, samples=1 00:11:46.402 lat (usec) : 500=0.01% 00:11:46.402 lat (msec) : 2=0.09%, 4=0.19%, 10=99.72% 00:11:46.402 cpu : usr=3.90%, sys=7.00%, ctx=1142, majf=0, minf=1 00:11:46.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:46.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.402 issued rwts: total=8222,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.402 job2: (groupid=0, jobs=1): err= 0: pid=1580000: Thu Nov 28 10:44:24 2024 00:11:46.402 read: IOPS=5931, BW=23.2MiB/s (24.3MB/s)(23.2MiB/1002msec) 00:11:46.402 slat (usec): min=2, max=2022, avg=82.53, stdev=285.46 00:11:46.402 clat (usec): min=927, max=18522, avg=10638.35, stdev=2829.38 00:11:46.402 lat (usec): min=2181, max=18525, avg=10720.88, stdev=2838.67 00:11:46.402 clat percentiles (usec): 00:11:46.402 | 1.00th=[ 5735], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9241], 00:11:46.402 | 30.00th=[ 9503], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9634], 00:11:46.402 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[16581], 95.00th=[17171], 00:11:46.402 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:11:46.402 | 99.99th=[18482] 00:11:46.402 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:11:46.402 slat (usec): min=2, max=1772, avg=78.62, stdev=265.07 00:11:46.402 clat (usec): min=6880, max=18102, avg=10314.56, stdev=2762.07 00:11:46.402 lat (usec): min=7913, max=18135, avg=10393.18, stdev=2774.05 00:11:46.402 clat percentiles (usec): 00:11:46.402 | 1.00th=[ 7701], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8717], 00:11:46.402 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 8979], 60.00th=[ 9110], 00:11:46.402 | 70.00th=[ 9372], 80.00th=[13304], 90.00th=[15795], 95.00th=[16188], 00:11:46.402 | 99.00th=[16909], 
99.50th=[17171], 99.90th=[17433], 99.95th=[17695], 00:11:46.402 | 99.99th=[18220] 00:11:46.402 bw ( KiB/s): min=20480, max=20480, per=17.32%, avg=20480.00, stdev= 0.00, samples=1 00:11:46.402 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:46.402 lat (usec) : 1000=0.01% 00:11:46.402 lat (msec) : 4=0.21%, 10=79.43%, 20=20.35% 00:11:46.402 cpu : usr=3.40%, sys=5.69%, ctx=960, majf=0, minf=1 00:11:46.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:46.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.402 issued rwts: total=5943,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.402 job3: (groupid=0, jobs=1): err= 0: pid=1580001: Thu Nov 28 10:44:24 2024 00:11:46.402 read: IOPS=5764, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1002msec) 00:11:46.402 slat (usec): min=2, max=2185, avg=83.31, stdev=278.71 00:11:46.403 clat (usec): min=593, max=18577, avg=10792.32, stdev=2885.37 00:11:46.403 lat (usec): min=1756, max=19052, avg=10875.63, stdev=2909.84 00:11:46.403 clat percentiles (usec): 00:11:46.403 | 1.00th=[ 5604], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9241], 00:11:46.403 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:11:46.403 | 70.00th=[10290], 80.00th=[10945], 90.00th=[16712], 95.00th=[17171], 00:11:46.403 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:11:46.403 | 99.99th=[18482] 00:11:46.403 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:11:46.403 slat (usec): min=2, max=2219, avg=80.22, stdev=264.63 00:11:46.403 clat (usec): min=5510, max=17937, avg=10456.55, stdev=2630.42 00:11:46.403 lat (usec): min=5932, max=17959, avg=10536.77, stdev=2653.44 00:11:46.403 clat percentiles (usec): 00:11:46.403 | 1.00th=[ 7898], 5.00th=[ 8455], 10.00th=[ 8586], 20.00th=[ 8848], 00:11:46.403 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9634], 00:11:46.403 | 70.00th=[10028], 80.00th=[11994], 90.00th=[15664], 95.00th=[16188], 00:11:46.403 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:11:46.403 | 99.99th=[17957] 00:11:46.403 bw ( KiB/s): min=20480, max=20480, per=17.32%, avg=20480.00, stdev= 0.00, samples=1 00:11:46.403 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:46.403 lat (usec) : 750=0.01% 00:11:46.403 lat (msec) : 2=0.13%, 4=0.14%, 10=65.77%, 20=33.95% 00:11:46.403 cpu : usr=2.80%, sys=6.19%, ctx=1023, majf=0, minf=1 00:11:46.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:46.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.403 issued rwts: total=5776,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.403 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.403 00:11:46.403 Run status group 0 (all jobs): 00:11:46.403 READ: bw=110MiB/s (115MB/s), 22.5MiB/s-32.1MiB/s (23.6MB/s-33.6MB/s), io=110MiB (115MB), run=1001-1002msec 00:11:46.403 WRITE: bw=115MiB/s (121MB/s), 24.0MiB/s-34.0MiB/s (25.1MB/s-35.6MB/s), io=116MiB (121MB), run=1001-1002msec 00:11:46.403 00:11:46.403 Disk stats (read/write): 00:11:46.403 nvme0n1: ios=6971/7168, merge=0/0, ticks=13069/12761, in_queue=25830, util=84.85% 00:11:46.403 nvme0n2: ios=7015/7168, merge=0/0, 
ticks=13165/12623, in_queue=25788, util=85.39% 00:11:46.403 nvme0n3: ios=4638/5120, merge=0/0, ticks=16695/17469, in_queue=34164, util=88.47% 00:11:46.403 nvme0n4: ios=4608/5038, merge=0/0, ticks=14172/14813, in_queue=28985, util=89.41% 00:11:46.403 10:44:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:46.403 10:44:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1580107 00:11:46.403 10:44:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:46.403 10:44:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:46.403 [global] 00:11:46.403 thread=1 00:11:46.403 invalidate=1 00:11:46.403 rw=read 00:11:46.403 time_based=1 00:11:46.403 runtime=10 00:11:46.403 ioengine=libaio 00:11:46.403 direct=1 00:11:46.403 bs=4096 00:11:46.403 iodepth=1 00:11:46.403 norandommap=1 00:11:46.403 numjobs=1 00:11:46.403 00:11:46.403 [job0] 00:11:46.403 filename=/dev/nvme0n1 00:11:46.403 [job1] 00:11:46.403 filename=/dev/nvme0n2 00:11:46.403 [job2] 00:11:46.403 filename=/dev/nvme0n3 00:11:46.403 [job3] 00:11:46.403 filename=/dev/nvme0n4 00:11:46.403 Could not set queue depth (nvme0n1) 00:11:46.403 Could not set queue depth (nvme0n2) 00:11:46.403 Could not set queue depth (nvme0n3) 00:11:46.403 Could not set queue depth (nvme0n4) 00:11:46.403 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.403 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.403 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.403 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.403 fio-3.35 00:11:46.403 Starting 4 threads 00:11:49.698 10:44:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:49.698 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=59215872, buflen=4096 00:11:49.698 fio: pid=1580427, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:49.698 10:44:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:49.698 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=98357248, buflen=4096 00:11:49.698 fio: pid=1580426, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:49.698 10:44:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:49.698 10:44:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:49.698 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14278656, buflen=4096 00:11:49.698 fio: pid=1580423, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:49.958 10:44:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:49.958 10:44:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:50.216 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=26996736, buflen=4096 00:11:50.216 fio: pid=1580425, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:50.216 00:11:50.216 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1580423: Thu Nov 28 10:44:28 2024 00:11:50.216 read: IOPS=6566, BW=25.6MiB/s (26.9MB/s)(77.6MiB/3026msec) 00:11:50.216 slat (usec): min=8, max=27888, avg=12.13, stdev=245.15 00:11:50.216 clat (usec): min=54, max=290, avg=137.75, stdev=47.37 00:11:50.216 lat (usec): min=63, max=27993, avg=149.88, stdev=249.23 00:11:50.216 clat percentiles (usec): 00:11:50.216 | 1.00th=[ 64], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 85], 00:11:50.216 | 30.00th=[ 89], 40.00th=[ 104], 50.00th=[ 159], 60.00th=[ 165], 00:11:50.216 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 196], 95.00th=[ 210], 00:11:50.216 | 99.00th=[ 227], 99.50th=[ 233], 99.90th=[ 245], 99.95th=[ 273], 00:11:50.216 | 99.99th=[ 277] 00:11:50.217 bw ( KiB/s): min=21624, max=33232, per=25.41%, avg=24299.80, stdev=5002.09, samples=5 00:11:50.217 iops : min= 5406, max= 8308, avg=6074.80, stdev=1250.60, samples=5 00:11:50.217 lat (usec) : 100=39.32%, 250=60.60%, 500=0.08% 00:11:50.217 cpu : usr=3.47%, sys=8.99%, ctx=19875, majf=0, minf=1 00:11:50.217 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.217 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.217 issued rwts: total=19871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.217 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.217 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1580425: Thu Nov 28 10:44:28 2024 00:11:50.217 read: IOPS=6755, BW=26.4MiB/s (27.7MB/s)(89.7MiB/3401msec) 00:11:50.217 slat (usec): min=3, max=13810, avg=11.05, stdev=172.80 00:11:50.217 clat (usec): min=42, max=22379, avg=134.68, stdev=208.69 00:11:50.217 lat (usec): min=53, max=22388, avg=145.73, stdev=270.69 00:11:50.217 clat percentiles (usec): 00:11:50.217 | 1.00th=[ 55], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 72], 00:11:50.217 | 30.00th=[ 97], 40.00th=[ 127], 50.00th=[ 153], 60.00th=[ 163], 00:11:50.217 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:11:50.217 | 99.00th=[ 217], 99.50th=[ 227], 99.90th=[ 255], 99.95th=[ 285], 00:11:50.217 | 99.99th=[ 611] 00:11:50.217 bw ( KiB/s): min=22328, max=29079, per=25.37%, avg=24259.83, stdev=2583.51, samples=6 00:11:50.217 iops : min= 5582, max= 7269, avg=6064.83, stdev=645.60, samples=6 00:11:50.217 lat (usec) : 50=0.02%, 100=30.65%, 250=69.21%, 500=0.10%, 750=0.01% 00:11:50.217 lat (msec) : 50=0.01% 00:11:50.217 cpu : usr=2.47%, sys=8.82%, ctx=22983, majf=0, minf=2 00:11:50.217 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.217 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.217 issued rwts: total=22976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.217 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.217 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1580426: Thu Nov 28 10:44:28 
2024 00:11:50.217 read: IOPS=8509, BW=33.2MiB/s (34.9MB/s)(93.8MiB/2822msec) 00:11:50.217 slat (usec): min=8, max=16920, avg=10.02, stdev=119.78 00:11:50.217 clat (usec): min=73, max=572, avg=105.07, stdev=20.17 00:11:50.217 lat (usec): min=89, max=17030, avg=115.10, stdev=121.51 00:11:50.217 clat percentiles (usec): 00:11:50.217 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 92], 20.00th=[ 94], 00:11:50.217 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 102], 00:11:50.217 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 122], 95.00th=[ 153], 00:11:50.217 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 221], 99.95th=[ 241], 00:11:50.217 | 99.99th=[ 285] 00:11:50.217 bw ( KiB/s): min=30136, max=36688, per=36.39%, avg=34798.40, stdev=2853.70, samples=5 00:11:50.217 iops : min= 7534, max= 9172, avg=8699.60, stdev=713.42, samples=5 00:11:50.217 lat (usec) : 100=51.95%, 250=48.02%, 500=0.02%, 750=0.01% 00:11:50.217 cpu : usr=4.89%, sys=11.13%, ctx=24017, majf=0, minf=2 00:11:50.217 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.217 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.217 issued rwts: total=24014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.217 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.217 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1580427: Thu Nov 28 10:44:28 2024 00:11:50.217 read: IOPS=5514, BW=21.5MiB/s (22.6MB/s)(56.5MiB/2622msec) 00:11:50.217 slat (nsec): min=8277, max=38194, avg=9234.69, stdev=870.81 00:11:50.217 clat (usec): min=80, max=529, avg=169.07, stdev=23.89 00:11:50.217 lat (usec): min=89, max=539, avg=178.30, stdev=23.90 00:11:50.217 clat percentiles (usec): 00:11:50.217 | 1.00th=[ 100], 5.00th=[ 128], 10.00th=[ 139], 20.00th=[ 157], 00:11:50.217 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:11:50.217 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 202], 95.00th=[ 212], 00:11:50.217 | 99.00th=[ 227], 99.50th=[ 233], 99.90th=[ 243], 99.95th=[ 253], 00:11:50.217 | 99.99th=[ 281] 00:11:50.217 bw ( KiB/s): min=21640, max=22568, per=23.18%, avg=22168.00, stdev=361.91, samples=5 00:11:50.217 iops : min= 5410, max= 5642, avg=5542.00, stdev=90.48, samples=5 00:11:50.217 lat (usec) : 100=1.00%, 250=98.93%, 500=0.06%, 750=0.01% 00:11:50.217 cpu : usr=2.63%, sys=8.01%, ctx=14458, majf=0, minf=2 00:11:50.217 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.217 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.217 issued rwts: total=14458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.217 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.217 00:11:50.217 Run status group 0 (all jobs): 00:11:50.217 READ: bw=93.4MiB/s (97.9MB/s), 21.5MiB/s-33.2MiB/s (22.6MB/s-34.9MB/s), io=318MiB (333MB), run=2622-3401msec 00:11:50.217 00:11:50.217 Disk stats (read/write): 00:11:50.217 nvme0n1: ios=18000/0, merge=0/0, ticks=2407/0, in_queue=2407, util=92.65% 00:11:50.217 nvme0n2: ios=22030/0, merge=0/0, ticks=2857/0, in_queue=2857, util=93.63% 00:11:50.217 nvme0n3: ios=24013/0, merge=0/0, ticks=2321/0, in_queue=2321, util=95.41% 00:11:50.217 nvme0n4: ios=14203/0, merge=0/0, ticks=2296/0, in_queue=2296, util=96.38% 00:11:50.217 10:44:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:50.217 10:44:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:50.786 10:44:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:50.786 10:44:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:51.046 10:44:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:51.046 10:44:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:51.304 10:44:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:51.304 10:44:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:51.871 10:44:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:51.871 10:44:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:52.130 10:44:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:52.130 10:44:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1580107 00:11:52.130 10:44:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:52.130 10:44:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.067 10:44:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.067 10:44:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:53.067 10:44:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:53.067 10:44:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.067 10:44:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:53.067 10:44:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.067 10:44:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:53.067 10:44:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:53.068 10:44:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:53.068 nvmf hotplug test: fio failed as expected 00:11:53.068 10:44:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
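Note on the hotplug step traced above: while the background fio read job (pid 1580107, started earlier by fio.sh@58 with "-t read -r 10") is still running against /dev/nvme0n1..nvme0n4, the script deletes the raid and malloc bdevs that back the exported namespaces, and every fio thread then fails with err=95 (Operation not supported), which is the outcome the test wants. The following is only a rough bash sketch of that pattern, reconstructed from the xtrace lines rather than copied from fio.sh; rpc_py, fio_pid and the literal bdev names are taken from this run, and the real script iterates over $malloc_bdevs, $raid_malloc_bdevs and $concat_malloc_bdevs instead of hard-coded names.

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# fio_pid is assumed to have been captured when the fio-wrapper was started in
# the background (1580107 in this run).
# Hot-remove the raid bdevs first, then the malloc bdevs, while fio keeps reading.
$rpc_py bdev_raid_delete concat0
$rpc_py bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc_py bdev_malloc_delete "$malloc_bdev"
done
fio_status=0
wait "$fio_pid" || fio_status=$?   # the log records fio_status=4 here
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
if [ "$fio_status" -ne 0 ]; then
    echo 'nvmf hotplug test: fio failed as expected'
fi
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1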
00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:53.327 rmmod nvme_rdma 00:11:53.327 rmmod nvme_fabrics 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1576925 ']' 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1576925 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1576925 ']' 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1576925 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1576925 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1576925' 00:11:53.327 killing process with pid 1576925 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1576925 00:11:53.327 10:44:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1576925 00:11:55.246 10:44:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.246 10:44:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:55.246 00:11:55.247 real 0m31.880s 00:11:55.247 user 2m18.497s 00:11:55.247 sys 0m11.599s 
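The teardown traced above follows the usual nvmftestfini pattern: the per-job fio verify-state files are removed, the kernel initiator modules are unloaded (the rmmod nvme_rdma / nvme_fabrics messages), and the SPDK target started at the beginning of the test (pid 1576925) is killed after its process name has been checked. A hedged sketch of that cleanup flow, based only on the xtrace above and not on the verbatim nvmf/common.sh or autotest_common.sh helpers:

# Unload the kernel NVMe-oF initiator modules used by the rdma transport.
sync
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics

# Kill the nvmf target application, mirroring the checks the log shows.
pid=1576925                                       # pid printed by the log; normally passed in
[ -n "$pid" ] || exit 1                           # matches the '[' -z 1576925 ']' check
kill -0 "$pid"                                    # make sure the process still exists
process_name=$(ps --no-headers -o comm= "$pid")   # resolves to reactor_0 in this run
if [ "$process_name" != sudo ]; then              # the sudo case takes a different path in the real helper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
fi

This is deliberately simplified; the real helpers also cover the tcp transport, the iso network mode and non-Linux hosts, none of which this run exercises.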
00:11:55.247 10:44:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.247 10:44:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.247 ************************************ 00:11:55.247 END TEST nvmf_fio_target 00:11:55.247 ************************************ 00:11:55.247 10:44:33 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:55.247 10:44:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.247 10:44:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.247 10:44:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:55.247 ************************************ 00:11:55.247 START TEST nvmf_bdevio 00:11:55.247 ************************************ 00:11:55.247 10:44:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:55.247 * Looking for test storage... 00:11:55.247 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:55.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.247 --rc genhtml_branch_coverage=1 00:11:55.247 --rc genhtml_function_coverage=1 00:11:55.247 --rc genhtml_legend=1 00:11:55.247 --rc geninfo_all_blocks=1 00:11:55.247 --rc geninfo_unexecuted_blocks=1 00:11:55.247 00:11:55.247 ' 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:55.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.247 --rc genhtml_branch_coverage=1 00:11:55.247 --rc genhtml_function_coverage=1 00:11:55.247 --rc genhtml_legend=1 00:11:55.247 --rc geninfo_all_blocks=1 00:11:55.247 --rc geninfo_unexecuted_blocks=1 00:11:55.247 00:11:55.247 ' 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:55.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.247 --rc genhtml_branch_coverage=1 00:11:55.247 --rc genhtml_function_coverage=1 00:11:55.247 --rc genhtml_legend=1 00:11:55.247 --rc geninfo_all_blocks=1 00:11:55.247 --rc geninfo_unexecuted_blocks=1 00:11:55.247 00:11:55.247 ' 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:55.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.247 --rc genhtml_branch_coverage=1 00:11:55.247 --rc genhtml_function_coverage=1 00:11:55.247 --rc genhtml_legend=1 00:11:55.247 --rc geninfo_all_blocks=1 00:11:55.247 --rc geninfo_unexecuted_blocks=1 00:11:55.247 00:11:55.247 ' 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:55.247 10:44:34 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:55.247 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.507 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.507 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.508 10:44:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.639 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:03.639 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:03.640 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:03.640 10:44:41 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:03.640 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:03.640 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
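The trace above loads the RDMA kernel modules and then resolves each interface's IPv4 address with a small `ip -o -4 addr show | awk | cut` pipeline (the get_ip_address helper). A minimal standalone sketch of that extraction, assuming the mlx_0_0/mlx_0_1 interface names seen in this run:

#!/usr/bin/env bash
# Sketch: print the IPv4 address of each RDMA-capable netdev, mirroring the
# get_ip_address pipeline traced above. Interface names are taken from this
# run and are an assumption for any other machine.
for ifc in mlx_0_0 mlx_0_1; do
    # "ip -o -4" prints one line per address; field 4 is "ADDR/PREFIXLEN".
    addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)
    echo "$ifc -> ${addr:-<no IPv4 address>}"
done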
00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:03.640 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:03.640 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:03.640 altname enp217s0f0np0 00:12:03.640 altname ens818f0np0 00:12:03.640 inet 192.168.100.8/24 scope global mlx_0_0 00:12:03.640 valid_lft forever preferred_lft forever 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:03.640 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:03.640 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:03.640 altname enp217s0f1np1 00:12:03.640 altname ens818f1np1 00:12:03.640 inet 192.168.100.9/24 scope global mlx_0_1 00:12:03.640 valid_lft forever preferred_lft forever 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
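Both ports already carry addresses from the 192.168.100.0/24 test prefix (NVMF_IP_PREFIX=192.168.100, least address 8), so the allocation step only has to read them back. For illustration only, a hedged sketch of assigning those same addresses by hand if they were missing; the actual body of allocate_nic_ips is not shown in this trace, so this is not the script's implementation:

# Sketch only: give the two mlx ports the 192.168.100.8/24 and .9/24 test
# addresses visible above and bring the links up.
ip addr add 192.168.100.8/24 dev mlx_0_0 2>/dev/null || true   # ignore "already assigned"
ip addr add 192.168.100.9/24 dev mlx_0_1 2>/dev/null || true
ip link set mlx_0_0 up
ip link set mlx_0_1 up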
00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:03.640 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:03.641 192.168.100.9' 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:03.641 192.168.100.9' 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:03.641 192.168.100.9' 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1585733 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1585733 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1585733 ']' 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.641 10:44:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:03.641 [2024-11-28 10:44:41.986227] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:03.641 [2024-11-28 10:44:41.986338] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.641 [2024-11-28 10:44:42.140640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.641 [2024-11-28 10:44:42.243408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.641 [2024-11-28 10:44:42.243457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.641 [2024-11-28 10:44:42.243470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.641 [2024-11-28 10:44:42.243484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.641 [2024-11-28 10:44:42.243494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
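nvmfappstart launches build/bin/nvmf_tgt with the same core mask (0x78) and tracepoint mask (0xFFFF) shown above, records its pid, and blocks in waitforlisten until the app answers on the default RPC socket /var/tmp/spdk.sock. A hedged sketch of that start-and-wait pattern; polling rpc_get_methods is my assumed readiness check, not necessarily what waitforlisten does internally:

# Sketch: start nvmf_tgt with the masks used in this run and wait for its
# JSON-RPC socket to come up.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"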
00:12:03.641 [2024-11-28 10:44:42.245963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:03.641 [2024-11-28 10:44:42.246055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:03.641 [2024-11-28 10:44:42.246122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.641 [2024-11-28 10:44:42.246147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:03.901 10:44:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.901 10:44:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:03.901 10:44:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:03.901 10:44:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:03.901 10:44:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.161 10:44:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.161 10:44:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:04.161 10:44:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.161 10:44:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.161 [2024-11-28 10:44:42.871149] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f2d1df9a940) succeed. 00:12:04.161 [2024-11-28 10:44:42.881328] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f2d1df56940) succeed. 
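With the target up, the test issues nvmf_create_transport through rpc_cmd, which wraps SPDK's JSON-RPC client, and the two mlx5 IB devices are registered. The same call made directly with scripts/rpc.py would look like the sketch below (socket path assumed to be the default /var/tmp/spdk.sock used above):

# Sketch: create the RDMA transport on the running nvmf_tgt, with the same
# arguments as the rpc_cmd invocation traced above.
./scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192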
00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.421 Malloc0 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.421 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.422 [2024-11-28 10:44:43.244699] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:04.422 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.422 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:04.422 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:04.422 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:04.422 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:04.422 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:04.422 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:04.422 { 00:12:04.422 "params": { 00:12:04.422 "name": "Nvme$subsystem", 00:12:04.422 "trtype": "$TEST_TRANSPORT", 00:12:04.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:04.422 "adrfam": "ipv4", 00:12:04.422 "trsvcid": "$NVMF_PORT", 00:12:04.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:04.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:04.422 "hdgst": ${hdgst:-false}, 00:12:04.422 "ddgst": ${ddgst:-false} 00:12:04.422 }, 00:12:04.422 "method": "bdev_nvme_attach_controller" 00:12:04.422 } 00:12:04.422 EOF 00:12:04.422 )") 00:12:04.422 10:44:43 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:04.422 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:12:04.422 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:04.422 10:44:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:04.422 "params": { 00:12:04.422 "name": "Nvme1", 00:12:04.422 "trtype": "rdma", 00:12:04.422 "traddr": "192.168.100.8", 00:12:04.422 "adrfam": "ipv4", 00:12:04.422 "trsvcid": "4420", 00:12:04.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:04.422 "hdgst": false, 00:12:04.422 "ddgst": false 00:12:04.422 }, 00:12:04.422 "method": "bdev_nvme_attach_controller" 00:12:04.422 }' 00:12:04.682 [2024-11-28 10:44:43.325735] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:04.682 [2024-11-28 10:44:43.325819] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585999 ] 00:12:04.682 [2024-11-28 10:44:43.481836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:04.942 [2024-11-28 10:44:43.590206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.942 [2024-11-28 10:44:43.590273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.942 [2024-11-28 10:44:43.590278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.202 I/O targets: 00:12:05.202 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:05.202 00:12:05.202 00:12:05.202 CUnit - A unit testing framework for C - Version 2.1-3 00:12:05.202 http://cunit.sourceforge.net/ 00:12:05.202 00:12:05.202 00:12:05.202 Suite: bdevio tests on: Nvme1n1 00:12:05.202 Test: blockdev write read block ...passed 00:12:05.202 Test: blockdev write zeroes read block ...passed 00:12:05.202 Test: blockdev write zeroes read no split ...passed 00:12:05.202 Test: blockdev write zeroes read split ...passed 00:12:05.202 Test: blockdev write zeroes read split partial ...passed 00:12:05.202 Test: blockdev reset ...[2024-11-28 10:44:44.075763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:05.462 [2024-11-28 10:44:44.111349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:12:05.462 [2024-11-28 10:44:44.145251] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
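The entries above show the whole provisioning sequence bdevio.sh drives over RPC: a 64 MiB, 512-byte-block malloc bdev, subsystem cnode1, a namespace, and an RDMA listener on 192.168.100.8:4420, after which the bdevio app consumes the generated bdev_nvme_attach_controller JSON on /dev/fd/62. A hedged sketch of the same sequence issued directly with scripts/rpc.py, using the exact arguments from the trace:

# Sketch: reproduce the subsystem provisioning traced above with direct RPC
# calls against the running target.
rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420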
00:12:05.462 passed 00:12:05.462 Test: blockdev write read 8 blocks ...passed 00:12:05.462 Test: blockdev write read size > 128k ...passed 00:12:05.462 Test: blockdev write read invalid size ...passed 00:12:05.462 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:05.462 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:05.462 Test: blockdev write read max offset ...passed 00:12:05.462 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:05.462 Test: blockdev writev readv 8 blocks ...passed 00:12:05.462 Test: blockdev writev readv 30 x 1block ...passed 00:12:05.462 Test: blockdev writev readv block ...passed 00:12:05.462 Test: blockdev writev readv size > 128k ...passed 00:12:05.462 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:05.462 Test: blockdev comparev and writev ...[2024-11-28 10:44:44.150785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.462 [2024-11-28 10:44:44.150825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:05.462 [2024-11-28 10:44:44.150842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.462 [2024-11-28 10:44:44.150858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:05.462 [2024-11-28 10:44:44.151042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.462 [2024-11-28 10:44:44.151060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:05.462 [2024-11-28 10:44:44.151074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.462 [2024-11-28 10:44:44.151088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:05.462 [2024-11-28 10:44:44.151275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.462 [2024-11-28 10:44:44.151295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:05.462 [2024-11-28 10:44:44.151309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.462 [2024-11-28 10:44:44.151325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:05.462 [2024-11-28 10:44:44.151497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.462 [2024-11-28 10:44:44.151517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:05.462 [2024-11-28 10:44:44.151531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.462 [2024-11-28 10:44:44.151546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:05.462 passed 00:12:05.462 Test: blockdev nvme passthru rw ...passed 00:12:05.462 Test: blockdev nvme passthru vendor specific ...[2024-11-28 10:44:44.151885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:05.462 [2024-11-28 10:44:44.151909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:05.462 [2024-11-28 10:44:44.151970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:05.462 [2024-11-28 10:44:44.151987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:05.462 [2024-11-28 10:44:44.152046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:05.462 [2024-11-28 10:44:44.152066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:05.462 [2024-11-28 10:44:44.152117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:05.462 [2024-11-28 10:44:44.152133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:05.462 passed 00:12:05.462 Test: blockdev nvme admin passthru ...passed 00:12:05.462 Test: blockdev copy ...passed 00:12:05.462 00:12:05.462 Run Summary: Type Total Ran Passed Failed Inactive 00:12:05.462 suites 1 1 n/a 0 0 00:12:05.462 tests 23 23 23 0 0 00:12:05.462 asserts 152 152 152 0 n/a 00:12:05.462 00:12:05.462 Elapsed time = 0.373 seconds 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:06.404 rmmod nvme_rdma 00:12:06.404 rmmod nvme_fabrics 00:12:06.404 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.405 10:44:45 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1585733 ']' 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1585733 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1585733 ']' 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1585733 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1585733 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1585733' 00:12:06.405 killing process with pid 1585733 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1585733 00:12:06.405 10:44:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1585733 00:12:08.315 10:44:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:08.315 10:44:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:08.315 00:12:08.315 real 0m13.134s 00:12:08.315 user 0m23.179s 00:12:08.315 sys 0m6.911s 00:12:08.315 10:44:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.315 10:44:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.315 ************************************ 00:12:08.315 END TEST nvmf_bdevio 00:12:08.315 ************************************ 00:12:08.315 10:44:47 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:08.315 00:12:08.315 real 5m1.518s 00:12:08.315 user 12m30.443s 00:12:08.315 sys 1m57.465s 00:12:08.315 10:44:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.315 10:44:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:08.315 ************************************ 00:12:08.315 END TEST nvmf_target_core 00:12:08.315 ************************************ 00:12:08.315 10:44:47 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:08.315 10:44:47 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.315 10:44:47 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.315 10:44:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:08.315 ************************************ 00:12:08.315 START TEST nvmf_target_extra 00:12:08.315 ************************************ 00:12:08.315 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:08.575 * Looking for test storage... 00:12:08.575 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:08.575 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:08.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.576 --rc genhtml_branch_coverage=1 00:12:08.576 --rc genhtml_function_coverage=1 00:12:08.576 --rc genhtml_legend=1 00:12:08.576 --rc geninfo_all_blocks=1 00:12:08.576 --rc geninfo_unexecuted_blocks=1 00:12:08.576 00:12:08.576 ' 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:08.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.576 --rc genhtml_branch_coverage=1 00:12:08.576 --rc genhtml_function_coverage=1 00:12:08.576 --rc genhtml_legend=1 00:12:08.576 --rc geninfo_all_blocks=1 00:12:08.576 --rc geninfo_unexecuted_blocks=1 00:12:08.576 00:12:08.576 ' 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:08.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.576 --rc genhtml_branch_coverage=1 00:12:08.576 --rc genhtml_function_coverage=1 00:12:08.576 --rc genhtml_legend=1 00:12:08.576 --rc geninfo_all_blocks=1 00:12:08.576 --rc geninfo_unexecuted_blocks=1 00:12:08.576 00:12:08.576 ' 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:08.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.576 --rc genhtml_branch_coverage=1 00:12:08.576 --rc genhtml_function_coverage=1 00:12:08.576 --rc genhtml_legend=1 00:12:08.576 --rc geninfo_all_blocks=1 00:12:08.576 --rc geninfo_unexecuted_blocks=1 00:12:08.576 00:12:08.576 ' 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.576 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.576 10:44:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.577 ************************************ 00:12:08.577 START TEST nvmf_example 00:12:08.577 ************************************ 00:12:08.577 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:08.841 * Looking for test storage... 
00:12:08.841 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:08.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.841 --rc genhtml_branch_coverage=1 00:12:08.841 --rc genhtml_function_coverage=1 00:12:08.841 --rc genhtml_legend=1 00:12:08.841 --rc geninfo_all_blocks=1 00:12:08.841 --rc geninfo_unexecuted_blocks=1 00:12:08.841 00:12:08.841 ' 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:08.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.841 --rc genhtml_branch_coverage=1 00:12:08.841 --rc genhtml_function_coverage=1 00:12:08.841 --rc genhtml_legend=1 00:12:08.841 --rc geninfo_all_blocks=1 00:12:08.841 --rc geninfo_unexecuted_blocks=1 00:12:08.841 00:12:08.841 ' 00:12:08.841 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:08.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.841 --rc genhtml_branch_coverage=1 00:12:08.841 --rc genhtml_function_coverage=1 00:12:08.841 --rc genhtml_legend=1 00:12:08.842 --rc geninfo_all_blocks=1 00:12:08.842 --rc geninfo_unexecuted_blocks=1 00:12:08.842 00:12:08.842 ' 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:08.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.842 --rc genhtml_branch_coverage=1 00:12:08.842 --rc genhtml_function_coverage=1 00:12:08.842 --rc genhtml_legend=1 00:12:08.842 --rc geninfo_all_blocks=1 00:12:08.842 --rc geninfo_unexecuted_blocks=1 00:12:08.842 00:12:08.842 ' 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.842 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.842 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:17.076 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
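Aside (not part of the captured trace): the probing above works by matching PCI vendor:device IDs against known Intel (e810, x722) and Mellanox (mlx) NIC lists; 0x15b3 is the Mellanox vendor ID and 0x1015 the device ID matched for 0000:d9:00.0 and 0000:d9:00.1. A hypothetical standalone check, assuming lspci is installed on the host:
    # list Mellanox functions with their [vendor:device] IDs
    lspci -Dnn -d 15b3:
    # on this host the output would be expected to include lines like
    #   0000:d9:00.0 ... [15b3:1015]
    #   0000:d9:00.1 ... [15b3:1015]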
00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:17.076 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:17.076 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.076 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:17.077 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:17.077 10:44:55 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:17.077 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:17.077 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:17.077 altname enp217s0f0np0 00:12:17.077 altname ens818f0np0 00:12:17.077 inet 192.168.100.8/24 scope global mlx_0_0 00:12:17.077 valid_lft forever preferred_lft forever 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:17.077 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:17.077 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:17.077 altname enp217s0f1np1 00:12:17.077 altname ens818f1np1 00:12:17.077 inet 192.168.100.9/24 scope global mlx_0_1 00:12:17.077 valid_lft forever preferred_lft forever 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- 
# get_available_rdma_ips 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:17.077 10:44:55 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:17.077 192.168.100.9' 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:17.077 192.168.100.9' 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:17.077 192.168.100.9' 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:17.077 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1590774 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1590774 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1590774 ']' 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
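Aside (not part of the captured trace): at this point the example target (build/examples/nvmf -i 0 -g 10000 -m 0xF) has been launched and the harness is waiting for its RPC socket at /var/tmp/spdk.sock. The rpc_cmd calls that follow in the trace configure it; expressed as standalone SPDK rpc.py invocations, the same sequence would look roughly like this (a sketch using the values visible in the trace; the default socket path and the Malloc0 bdev name returned by the trace's own bdev_malloc_create call are assumed):
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                     # 64 MiB malloc bdev, 512 B blocks (trace shows it returned Malloc0)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc.py talks to whichever SPDK application owns /var/tmp/spdk.sock, so the same commands apply whether the target is this example binary or the regular nvmf_tgt app.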
00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.078 10:44:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:17.338 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.338 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:17.338 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:17.338 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.338 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:17.598 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:17.598 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.598 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:17.858 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.858 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:17.858 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.858 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:17.858 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:17.859 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:30.088 Initializing NVMe Controllers 00:12:30.088 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:30.088 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:30.088 Initialization complete. Launching workers. 00:12:30.088 ======================================================== 00:12:30.088 Latency(us) 00:12:30.088 Device Information : IOPS MiB/s Average min max 00:12:30.088 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 23692.30 92.55 2700.83 749.30 19041.59 00:12:30.088 ======================================================== 00:12:30.088 Total : 23692.30 92.55 2700.83 749.30 19041.59 00:12:30.088 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:30.088 rmmod nvme_rdma 00:12:30.088 rmmod nvme_fabrics 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1590774 ']' 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1590774 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1590774 ']' 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1590774 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1590774 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:30.088 10:45:08 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1590774' 00:12:30.088 killing process with pid 1590774 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1590774 00:12:30.088 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1590774 00:12:31.472 nvmf threads initialize successfully 00:12:31.472 bdev subsystem init successfully 00:12:31.472 created a nvmf target service 00:12:31.472 create targets's poll groups done 00:12:31.472 all subsystems of target started 00:12:31.472 nvmf target is running 00:12:31.472 all subsystems of target stopped 00:12:31.472 destroy targets's poll groups done 00:12:31.472 destroyed the nvmf target service 00:12:31.472 bdev subsystem finish successfully 00:12:31.472 nvmf threads destroy successfully 00:12:31.472 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:31.472 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:31.472 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:31.472 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:31.472 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:31.472 00:12:31.472 real 0m22.635s 00:12:31.472 user 0m58.481s 00:12:31.472 sys 0m6.571s 00:12:31.472 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.472 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:31.472 ************************************ 00:12:31.472 END TEST nvmf_example 00:12:31.472 ************************************ 00:12:31.472 10:45:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:31.472 10:45:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:31.472 10:45:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.472 10:45:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.472 ************************************ 00:12:31.472 START TEST nvmf_filesystem 00:12:31.472 ************************************ 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:31.473 * Looking for test storage... 
00:12:31.473 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:31.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.473 --rc genhtml_branch_coverage=1 00:12:31.473 --rc genhtml_function_coverage=1 00:12:31.473 --rc genhtml_legend=1 00:12:31.473 --rc geninfo_all_blocks=1 00:12:31.473 --rc geninfo_unexecuted_blocks=1 00:12:31.473 00:12:31.473 ' 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:31.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.473 --rc genhtml_branch_coverage=1 00:12:31.473 --rc genhtml_function_coverage=1 00:12:31.473 --rc genhtml_legend=1 00:12:31.473 --rc geninfo_all_blocks=1 00:12:31.473 --rc geninfo_unexecuted_blocks=1 00:12:31.473 00:12:31.473 ' 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:31.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.473 --rc genhtml_branch_coverage=1 00:12:31.473 --rc genhtml_function_coverage=1 00:12:31.473 --rc genhtml_legend=1 00:12:31.473 --rc geninfo_all_blocks=1 00:12:31.473 --rc geninfo_unexecuted_blocks=1 00:12:31.473 00:12:31.473 ' 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:31.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.473 --rc genhtml_branch_coverage=1 00:12:31.473 --rc genhtml_function_coverage=1 00:12:31.473 --rc genhtml_legend=1 00:12:31.473 --rc geninfo_all_blocks=1 00:12:31.473 --rc geninfo_unexecuted_blocks=1 00:12:31.473 00:12:31.473 ' 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:12:31.473 10:45:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:31.473 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:31.474 10:45:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:31.474 10:45:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:12:31.474 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:31.474 #define SPDK_CONFIG_H 00:12:31.474 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:31.474 #define SPDK_CONFIG_APPS 1 00:12:31.474 #define SPDK_CONFIG_ARCH native 00:12:31.474 #define SPDK_CONFIG_ASAN 1 00:12:31.474 #undef SPDK_CONFIG_AVAHI 00:12:31.474 #undef SPDK_CONFIG_CET 00:12:31.474 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:31.474 #define SPDK_CONFIG_COVERAGE 1 00:12:31.474 #define SPDK_CONFIG_CROSS_PREFIX 00:12:31.474 #undef SPDK_CONFIG_CRYPTO 00:12:31.474 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:31.474 #undef SPDK_CONFIG_CUSTOMOCF 00:12:31.474 #undef SPDK_CONFIG_DAOS 00:12:31.474 #define SPDK_CONFIG_DAOS_DIR 00:12:31.474 #define SPDK_CONFIG_DEBUG 1 00:12:31.474 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:31.474 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:31.474 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:31.474 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:31.474 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:31.474 #undef SPDK_CONFIG_DPDK_UADK 00:12:31.474 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:31.474 #define SPDK_CONFIG_EXAMPLES 1 00:12:31.475 #undef SPDK_CONFIG_FC 00:12:31.475 #define SPDK_CONFIG_FC_PATH 00:12:31.475 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:31.475 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:31.475 #define SPDK_CONFIG_FSDEV 1 00:12:31.475 #undef SPDK_CONFIG_FUSE 00:12:31.475 #undef SPDK_CONFIG_FUZZER 00:12:31.475 #define SPDK_CONFIG_FUZZER_LIB 00:12:31.475 #undef SPDK_CONFIG_GOLANG 00:12:31.475 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:31.475 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:31.475 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:31.475 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:31.475 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:31.475 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:31.475 #undef SPDK_CONFIG_HAVE_LZ4 00:12:31.475 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:31.475 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:31.475 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:31.475 #define SPDK_CONFIG_IDXD 1 00:12:31.475 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:31.475 #undef SPDK_CONFIG_IPSEC_MB 00:12:31.475 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:31.475 #define SPDK_CONFIG_ISAL 1 00:12:31.475 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:31.475 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:31.475 #define SPDK_CONFIG_LIBDIR 00:12:31.475 #undef SPDK_CONFIG_LTO 00:12:31.475 #define SPDK_CONFIG_MAX_LCORES 128 00:12:31.475 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:31.475 #define SPDK_CONFIG_NVME_CUSE 1 00:12:31.475 #undef SPDK_CONFIG_OCF 00:12:31.475 #define SPDK_CONFIG_OCF_PATH 00:12:31.475 #define SPDK_CONFIG_OPENSSL_PATH 00:12:31.475 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:31.475 #define SPDK_CONFIG_PGO_DIR 00:12:31.475 #undef SPDK_CONFIG_PGO_USE 00:12:31.475 #define SPDK_CONFIG_PREFIX /usr/local 00:12:31.475 #undef SPDK_CONFIG_RAID5F 00:12:31.475 #undef SPDK_CONFIG_RBD 00:12:31.475 #define SPDK_CONFIG_RDMA 1 00:12:31.475 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:31.475 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:31.475 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:31.475 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:31.475 #define SPDK_CONFIG_SHARED 1 00:12:31.475 #undef SPDK_CONFIG_SMA 
00:12:31.475 #define SPDK_CONFIG_TESTS 1 00:12:31.475 #undef SPDK_CONFIG_TSAN 00:12:31.475 #define SPDK_CONFIG_UBLK 1 00:12:31.475 #define SPDK_CONFIG_UBSAN 1 00:12:31.475 #undef SPDK_CONFIG_UNIT_TESTS 00:12:31.475 #undef SPDK_CONFIG_URING 00:12:31.475 #define SPDK_CONFIG_URING_PATH 00:12:31.475 #undef SPDK_CONFIG_URING_ZNS 00:12:31.475 #undef SPDK_CONFIG_USDT 00:12:31.475 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:31.475 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:31.475 #undef SPDK_CONFIG_VFIO_USER 00:12:31.475 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:31.475 #define SPDK_CONFIG_VHOST 1 00:12:31.475 #define SPDK_CONFIG_VIRTIO 1 00:12:31.475 #undef SPDK_CONFIG_VTUNE 00:12:31.475 #define SPDK_CONFIG_VTUNE_DIR 00:12:31.475 #define SPDK_CONFIG_WERROR 1 00:12:31.475 #define SPDK_CONFIG_WPDK_DIR 00:12:31.475 #undef SPDK_CONFIG_XNVME 00:12:31.475 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:31.475 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:31.737 10:45:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:31.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:31.738 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:31.739 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1594048 ]] 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1594048 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.VSm2pr 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VSm2pr/tests/target /tmp/spdk.VSm2pr 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=54710513664 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730586624 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7020072960 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30850498560 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865293312 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=14794752 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12322701312 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346118144 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23416832 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30863663104 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865293312 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1630208 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:31.740 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:31.740 10:45:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:31.741 * Looking for test storage... 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=54710513664 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9234665472 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:31.741 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:31.741 10:45:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:31.741 10:45:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:31.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.741 --rc genhtml_branch_coverage=1 00:12:31.741 --rc genhtml_function_coverage=1 00:12:31.741 --rc genhtml_legend=1 00:12:31.741 --rc geninfo_all_blocks=1 00:12:31.741 --rc geninfo_unexecuted_blocks=1 00:12:31.741 00:12:31.741 ' 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:31.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.741 --rc genhtml_branch_coverage=1 00:12:31.741 --rc genhtml_function_coverage=1 00:12:31.741 --rc genhtml_legend=1 00:12:31.741 --rc geninfo_all_blocks=1 00:12:31.741 --rc geninfo_unexecuted_blocks=1 00:12:31.741 00:12:31.741 ' 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:31.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.741 --rc genhtml_branch_coverage=1 00:12:31.741 --rc genhtml_function_coverage=1 00:12:31.741 --rc genhtml_legend=1 00:12:31.741 --rc geninfo_all_blocks=1 00:12:31.741 --rc geninfo_unexecuted_blocks=1 00:12:31.741 00:12:31.741 ' 00:12:31.741 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:31.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.741 --rc genhtml_branch_coverage=1 00:12:31.741 --rc genhtml_function_coverage=1 00:12:31.741 --rc genhtml_legend=1 00:12:31.741 --rc geninfo_all_blocks=1 00:12:31.741 --rc geninfo_unexecuted_blocks=1 00:12:31.741 00:12:31.741 ' 00:12:31.741 
10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:31.742 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:31.742 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.871 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:39.872 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:39.872 
10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:39.872 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:39.872 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:39.872 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:39.872 10:45:18 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:39.872 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:39.872 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:39.872 altname enp217s0f0np0 00:12:39.872 altname ens818f0np0 00:12:39.872 inet 192.168.100.8/24 scope global mlx_0_0 00:12:39.872 valid_lft forever preferred_lft forever 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:39.872 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:39.873 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:39.873 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:39.873 altname enp217s0f1np1 00:12:39.873 altname ens818f1np1 00:12:39.873 inet 192.168.100.9/24 scope global mlx_0_1 00:12:39.873 valid_lft forever preferred_lft forever 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:39.873 10:45:18 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:39.873 192.168.100.9' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:39.873 192.168.100.9' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:39.873 192.168.100.9' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.873 ************************************ 00:12:39.873 START TEST nvmf_filesystem_no_in_capsule 00:12:39.873 ************************************ 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:39.873 10:45:18 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1597962 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1597962 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1597962 ']' 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.873 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.873 [2024-11-28 10:45:18.719098] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:39.873 [2024-11-28 10:45:18.719192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.133 [2024-11-28 10:45:18.873510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.133 [2024-11-28 10:45:18.972103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.133 [2024-11-28 10:45:18.972153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.133 [2024-11-28 10:45:18.972165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.133 [2024-11-28 10:45:18.972178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.133 [2024-11-28 10:45:18.972187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
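The trace above launches the SPDK NVMe-oF target itself: build/bin/nvmf_tgt is started with shared-memory id 0, all tracepoint groups enabled (-e 0xFFFF) and a four-core reactor mask (-m 0xF), and the suite's waitforlisten helper blocks until the target answers on /var/tmp/spdk.sock. A condensed, stand-alone sketch of that launch-and-wait step; the rpc.py polling loop below is only a simplified stand-in for waitforlisten, and the paths are the ones this job uses:

# Start the NVMe-oF target as in the trace: shm id 0, all tracepoint groups
# enabled, reactors on cores 0-3 (mask 0xF).
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Block until the target's default RPC socket answers before issuing RPCs
# (the suite's waitforlisten helper does this with a timeout; this loop is a
# simplified equivalent).
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"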
00:12:40.133 [2024-11-28 10:45:18.974598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.133 [2024-11-28 10:45:18.974678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.133 [2024-11-28 10:45:18.974707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.133 [2024-11-28 10:45:18.974716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.702 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.702 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:40.702 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:40.702 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:40.702 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.702 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.702 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:40.702 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:40.702 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.702 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.702 [2024-11-28 10:45:19.577336] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:40.962 [2024-11-28 10:45:19.618086] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f7ddd5bd940) succeed. 00:12:40.962 [2024-11-28 10:45:19.627606] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f7ddd579940) succeed. 
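The two "Create IB device ... succeed" notices confirm that the RDMA transport (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0) has bound both mlx5 ports; the lines that follow then provision the test subsystem over JSON-RPC: a 512 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, a namespace, and an RDMA listener on 192.168.100.8:4420. The rpc_cmd helper is a thin wrapper around scripts/rpc.py, so a stand-alone approximation of the same sequence (values copied from this run) would look like:

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# RDMA transport: 1024 shared data buffers, 8 KiB I/O unit size (-u), and
# in-capsule data requested as 0 for this "no_in_capsule" case (the target
# raises it to the 256-byte minimum, as the warning above shows).
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0

# 512 MiB RAM-backed bdev with 512-byte blocks (MALLOC_BDEV_SIZE/BLOCK_SIZE).
$RPC bdev_malloc_create 512 512 -b Malloc1

# Subsystem allowing any host (-a), with the serial the host greps for later.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME

# Attach Malloc1 as a namespace and listen on the first mlx5 interface.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420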
00:12:40.962 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.962 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:40.963 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.963 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.531 Malloc1 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.531 [2024-11-28 10:45:20.293466] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.531 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:41.532 10:45:20 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:41.532 { 00:12:41.532 "name": "Malloc1", 00:12:41.532 "aliases": [ 00:12:41.532 "f12f6f87-0706-4a9a-9541-1fe03e674499" 00:12:41.532 ], 00:12:41.532 "product_name": "Malloc disk", 00:12:41.532 "block_size": 512, 00:12:41.532 "num_blocks": 1048576, 00:12:41.532 "uuid": "f12f6f87-0706-4a9a-9541-1fe03e674499", 00:12:41.532 "assigned_rate_limits": { 00:12:41.532 "rw_ios_per_sec": 0, 00:12:41.532 "rw_mbytes_per_sec": 0, 00:12:41.532 "r_mbytes_per_sec": 0, 00:12:41.532 "w_mbytes_per_sec": 0 00:12:41.532 }, 00:12:41.532 "claimed": true, 00:12:41.532 "claim_type": "exclusive_write", 00:12:41.532 "zoned": false, 00:12:41.532 "supported_io_types": { 00:12:41.532 "read": true, 00:12:41.532 "write": true, 00:12:41.532 "unmap": true, 00:12:41.532 "flush": true, 00:12:41.532 "reset": true, 00:12:41.532 "nvme_admin": false, 00:12:41.532 "nvme_io": false, 00:12:41.532 "nvme_io_md": false, 00:12:41.532 "write_zeroes": true, 00:12:41.532 "zcopy": true, 00:12:41.532 "get_zone_info": false, 00:12:41.532 "zone_management": false, 00:12:41.532 "zone_append": false, 00:12:41.532 "compare": false, 00:12:41.532 "compare_and_write": false, 00:12:41.532 "abort": true, 00:12:41.532 "seek_hole": false, 00:12:41.532 "seek_data": false, 00:12:41.532 "copy": true, 00:12:41.532 "nvme_iov_md": false 00:12:41.532 }, 00:12:41.532 "memory_domains": [ 00:12:41.532 { 00:12:41.532 "dma_device_id": "system", 00:12:41.532 "dma_device_type": 1 00:12:41.532 }, 00:12:41.532 { 00:12:41.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.532 "dma_device_type": 2 00:12:41.532 } 00:12:41.532 ], 00:12:41.532 "driver_specific": {} 00:12:41.532 } 00:12:41.532 ]' 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:12:41.532 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:42.911 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.911 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:42.911 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.911 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:42.911 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:44.818 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:45.078 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.015 ************************************ 00:12:46.015 START TEST filesystem_ext4 00:12:46.015 ************************************ 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:46.015 mke2fs 1.47.0 (5-Feb-2023) 00:12:46.015 Discarding device blocks: 0/522240 done 00:12:46.015 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:46.015 Filesystem UUID: 765ca60e-bef3-4969-b4d3-c9713928e8cc 00:12:46.015 Superblock backups stored on 
blocks: 00:12:46.015 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:46.015 00:12:46.015 Allocating group tables: 0/64 done 00:12:46.015 Writing inode tables: 0/64 done 00:12:46.015 Creating journal (8192 blocks): done 00:12:46.015 Writing superblocks and filesystem accounting information: 0/64 done 00:12:46.015 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:46.015 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:46.275 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:46.275 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:46.275 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:46.275 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:46.275 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:46.275 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:46.275 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1597962 00:12:46.275 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:46.276 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:46.276 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:46.276 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:46.276 00:12:46.276 real 0m0.208s 00:12:46.276 user 0m0.025s 00:12:46.276 sys 0m0.085s 00:12:46.276 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.276 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:46.276 ************************************ 00:12:46.276 END TEST filesystem_ext4 00:12:46.276 ************************************ 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:12:46.276 ************************************ 00:12:46.276 START TEST filesystem_btrfs 00:12:46.276 ************************************ 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:46.276 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:46.534 btrfs-progs v6.8.1 00:12:46.535 See https://btrfs.readthedocs.io for more information. 00:12:46.535 00:12:46.535 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:46.535 NOTE: several default settings have changed in version 5.15, please make sure 00:12:46.535 this does not affect your deployments: 00:12:46.535 - DUP for metadata (-m dup) 00:12:46.535 - enabled no-holes (-O no-holes) 00:12:46.535 - enabled free-space-tree (-R free-space-tree) 00:12:46.535 00:12:46.535 Label: (null) 00:12:46.535 UUID: 32386b68-e4fd-4167-af99-925f7416635a 00:12:46.535 Node size: 16384 00:12:46.535 Sector size: 4096 (CPU page size: 4096) 00:12:46.535 Filesystem size: 510.00MiB 00:12:46.535 Block group profiles: 00:12:46.535 Data: single 8.00MiB 00:12:46.535 Metadata: DUP 32.00MiB 00:12:46.535 System: DUP 8.00MiB 00:12:46.535 SSD detected: yes 00:12:46.535 Zoned device: no 00:12:46.535 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:46.535 Checksum: crc32c 00:12:46.535 Number of devices: 1 00:12:46.535 Devices: 00:12:46.535 ID SIZE PATH 00:12:46.535 1 510.00MiB /dev/nvme0n1p1 00:12:46.535 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1597962 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:46.535 00:12:46.535 real 0m0.248s 00:12:46.535 user 0m0.034s 00:12:46.535 sys 0m0.119s 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:46.535 ************************************ 00:12:46.535 END TEST filesystem_btrfs 
00:12:46.535 ************************************ 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.535 ************************************ 00:12:46.535 START TEST filesystem_xfs 00:12:46.535 ************************************ 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:46.535 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:46.794 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:46.794 = sectsz=512 attr=2, projid32bit=1 00:12:46.794 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:46.794 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:46.794 data = bsize=4096 blocks=130560, imaxpct=25 00:12:46.794 = sunit=0 swidth=0 blks 00:12:46.794 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:46.794 log =internal log bsize=4096 blocks=16384, version=2 00:12:46.794 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:46.794 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:46.794 Discarding blocks...Done. 
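A quick sanity check on the mkfs.xfs geometry printed above (this is only arithmetic over values already in the trace, not extra tooling):

    4 allocation groups x 32640 blocks/group = 130560 data blocks
    130560 blocks x 4096 bytes/block         = 534,773,760 bytes = 510 MiB
    16384 internal-log blocks x 4096 bytes   = 64 MiB journal

The 510 MiB matches the size mkfs.btrfs reported for the same /dev/nvme0n1p1 a few lines earlier, so all three filesystems in this block are created on the identical 512 MiB namespace minus partition-table/alignment overhead.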
00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1597962 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:46.794 00:12:46.794 real 0m0.220s 00:12:46.794 user 0m0.029s 00:12:46.794 sys 0m0.086s 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:46.794 ************************************ 00:12:46.794 END TEST filesystem_xfs 00:12:46.794 ************************************ 00:12:46.794 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:46.795 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:47.054 10:45:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.992 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.992 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:47.992 10:45:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:47.992 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.992 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:47.992 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.992 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:47.992 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.992 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1597962 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1597962 ']' 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1597962 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1597962 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1597962' 00:12:47.993 killing process with pid 1597962 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1597962 00:12:47.993 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1597962 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:51.287 00:12:51.287 real 0m10.832s 00:12:51.287 user 0m40.683s 00:12:51.287 sys 0m1.462s 00:12:51.287 10:45:29 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:51.287 ************************************ 00:12:51.287 END TEST nvmf_filesystem_no_in_capsule 00:12:51.287 ************************************ 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:51.287 ************************************ 00:12:51.287 START TEST nvmf_filesystem_in_capsule 00:12:51.287 ************************************ 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1600041 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1600041 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1600041 ']' 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.287 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.288 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
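For readers following the in-capsule variant that starts here: the target process is launched the same way as in the no_in_capsule run above, and the functional difference shows up a few lines below where nvmf_create_transport is called with -c 4096, which advertises a 4096-byte in-capsule data size so small writes can travel inside the command capsule rather than forcing the target to fetch them with an RDMA READ. A condensed sketch of that setup, with the binary path and flags copied from this trace; the rpc.py invocation is an assumption about how rpc_cmd resolves against the default /var/tmp/spdk.sock socket:

    # start the NVMe-oF target (pid 1600041 in this run) and wait for its RPC socket
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # configure the RDMA transport; -c 4096 is what makes this the "in capsule" case
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096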
00:12:51.288 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.288 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:51.288 [2024-11-28 10:45:29.609091] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:51.288 [2024-11-28 10:45:29.609181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.288 [2024-11-28 10:45:29.763934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.288 [2024-11-28 10:45:29.861989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.288 [2024-11-28 10:45:29.862036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.288 [2024-11-28 10:45:29.862047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.288 [2024-11-28 10:45:29.862059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.288 [2024-11-28 10:45:29.862068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.288 [2024-11-28 10:45:29.864497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.288 [2024-11-28 10:45:29.864572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.288 [2024-11-28 10:45:29.864657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.288 [2024-11-28 10:45:29.864658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.857 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.857 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:51.857 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.857 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.857 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:51.857 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.857 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:51.857 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:12:51.857 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.857 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:51.857 [2024-11-28 10:45:30.525154] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x612000029140/0x7f335d571940) succeed. 00:12:51.857 [2024-11-28 10:45:30.534859] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f335d52d940) succeed. 00:12:52.116 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.116 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:52.116 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.117 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.686 Malloc1 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.686 [2024-11-28 10:45:31.298535] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1384 -- # local bs 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:52.686 { 00:12:52.686 "name": "Malloc1", 00:12:52.686 "aliases": [ 00:12:52.686 "9b591acc-1a89-4565-80f2-ef8d33b6a16e" 00:12:52.686 ], 00:12:52.686 "product_name": "Malloc disk", 00:12:52.686 "block_size": 512, 00:12:52.686 "num_blocks": 1048576, 00:12:52.686 "uuid": "9b591acc-1a89-4565-80f2-ef8d33b6a16e", 00:12:52.686 "assigned_rate_limits": { 00:12:52.686 "rw_ios_per_sec": 0, 00:12:52.686 "rw_mbytes_per_sec": 0, 00:12:52.686 "r_mbytes_per_sec": 0, 00:12:52.686 "w_mbytes_per_sec": 0 00:12:52.686 }, 00:12:52.686 "claimed": true, 00:12:52.686 "claim_type": "exclusive_write", 00:12:52.686 "zoned": false, 00:12:52.686 "supported_io_types": { 00:12:52.686 "read": true, 00:12:52.686 "write": true, 00:12:52.686 "unmap": true, 00:12:52.686 "flush": true, 00:12:52.686 "reset": true, 00:12:52.686 "nvme_admin": false, 00:12:52.686 "nvme_io": false, 00:12:52.686 "nvme_io_md": false, 00:12:52.686 "write_zeroes": true, 00:12:52.686 "zcopy": true, 00:12:52.686 "get_zone_info": false, 00:12:52.686 "zone_management": false, 00:12:52.686 "zone_append": false, 00:12:52.686 "compare": false, 00:12:52.686 "compare_and_write": false, 00:12:52.686 "abort": true, 00:12:52.686 "seek_hole": false, 00:12:52.686 "seek_data": false, 00:12:52.686 "copy": true, 00:12:52.686 "nvme_iov_md": false 00:12:52.686 }, 00:12:52.686 "memory_domains": [ 00:12:52.686 { 00:12:52.686 "dma_device_id": "system", 00:12:52.686 "dma_device_type": 1 00:12:52.686 }, 00:12:52.686 { 00:12:52.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.686 "dma_device_type": 2 00:12:52.686 } 00:12:52.686 ], 00:12:52.686 "driver_specific": {} 00:12:52.686 } 00:12:52.686 ]' 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@58 -- # malloc_size=536870912 00:12:52.686 10:45:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:53.625 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.625 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:53.625 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.625 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:53.625 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:55.534 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:55.534 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:55.534 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.534 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:55.534 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.534 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 
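The host-side plumbing traced above condenses to a short sequence; this is a reconstruction from the sh@NN markers (subsystem NQN, address, serial and sizes are the ones reported in this run; the until loop stands in for the waitforserial helper, and the --hostnqn/--hostid flags are omitted for brevity):

    # connect to the exported subsystem over RDMA
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    # wait until a namespace with the expected serial shows up
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
    # device resolved to nvme0n1; its 536870912 bytes must equal the Malloc1 bdev size
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    sleep 1

The size comparison guards against running the filesystem cycle on a stale namespace left over from an earlier test before the partition is laid down.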
00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:55.794 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:56.054 10:45:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.009 ************************************ 00:12:57.009 START TEST filesystem_in_capsule_ext4 00:12:57.009 ************************************ 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:57.009 mke2fs 1.47.0 (5-Feb-2023) 00:12:57.009 Discarding device blocks: 0/522240 done 00:12:57.009 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:57.009 Filesystem UUID: 
4de24811-a581-4a3e-947a-4a1c7aca94f7 00:12:57.009 Superblock backups stored on blocks: 00:12:57.009 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:57.009 00:12:57.009 Allocating group tables: 0/64 done 00:12:57.009 Writing inode tables: 0/64 done 00:12:57.009 Creating journal (8192 blocks): done 00:12:57.009 Writing superblocks and filesystem accounting information: 0/64 done 00:12:57.009 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:57.009 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1600041 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:57.268 00:12:57.268 real 0m0.194s 00:12:57.268 user 0m0.026s 00:12:57.268 sys 0m0.070s 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.268 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:57.268 ************************************ 00:12:57.268 END TEST filesystem_in_capsule_ext4 00:12:57.268 ************************************ 00:12:57.268 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:57.268 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:57.268 10:45:36 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.268 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.268 ************************************ 00:12:57.268 START TEST filesystem_in_capsule_btrfs 00:12:57.268 ************************************ 00:12:57.268 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:57.269 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:57.269 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:57.269 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:57.269 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:57.269 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:57.269 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:57.269 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:57.269 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:57.269 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:57.269 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:57.529 btrfs-progs v6.8.1 00:12:57.529 See https://btrfs.readthedocs.io for more information. 00:12:57.529 00:12:57.529 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:57.529 NOTE: several default settings have changed in version 5.15, please make sure 00:12:57.529 this does not affect your deployments: 00:12:57.529 - DUP for metadata (-m dup) 00:12:57.529 - enabled no-holes (-O no-holes) 00:12:57.529 - enabled free-space-tree (-R free-space-tree) 00:12:57.529 00:12:57.529 Label: (null) 00:12:57.529 UUID: 89edba17-ca8a-478e-847d-6dce91eda970 00:12:57.529 Node size: 16384 00:12:57.529 Sector size: 4096 (CPU page size: 4096) 00:12:57.529 Filesystem size: 510.00MiB 00:12:57.529 Block group profiles: 00:12:57.529 Data: single 8.00MiB 00:12:57.529 Metadata: DUP 32.00MiB 00:12:57.529 System: DUP 8.00MiB 00:12:57.529 SSD detected: yes 00:12:57.529 Zoned device: no 00:12:57.529 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:57.529 Checksum: crc32c 00:12:57.529 Number of devices: 1 00:12:57.529 Devices: 00:12:57.529 ID SIZE PATH 00:12:57.529 1 510.00MiB /dev/nvme0n1p1 00:12:57.529 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1600041 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:57.529 00:12:57.529 real 0m0.258s 00:12:57.529 user 0m0.035s 00:12:57.529 sys 0m0.127s 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.529 ************************************ 00:12:57.529 END TEST filesystem_in_capsule_btrfs 00:12:57.529 ************************************ 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.529 ************************************ 00:12:57.529 START TEST filesystem_in_capsule_xfs 00:12:57.529 ************************************ 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:57.529 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:57.790 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:57.790 = sectsz=512 attr=2, projid32bit=1 00:12:57.790 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:57.790 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:57.790 data = bsize=4096 blocks=130560, imaxpct=25 00:12:57.790 = sunit=0 swidth=0 blks 00:12:57.790 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:57.790 log =internal log bsize=4096 blocks=16384, version=2 00:12:57.790 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:57.790 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:57.790 Discarding blocks...Done. 
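The xfs case below then runs the same mount/IO/unmount cycle as the ext4 and btrfs cases before it, and the whole in-capsule block tears down immediately afterwards. Condensed from the target/filesystem.sh line markers in the trace (pid, device names and NQN are the ones from this run; the final RPC and kill lines paraphrase rpc_cmd and killprocess):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync            # prove the filesystem accepts writes
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 1600041                          # signal 0: only checks the target process is still alive
    lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible
    # teardown: drop the partition, disconnect, delete the subsystem, stop the target
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 1600041

Using kill -0 keeps the liveness check cheap: it fails only if the target has exited, without delivering a real signal to the process under test.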
00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1600041 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:57.790 00:12:57.790 real 0m0.227s 00:12:57.790 user 0m0.026s 00:12:57.790 sys 0m0.086s 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.790 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:57.790 ************************************ 00:12:57.790 END TEST filesystem_in_capsule_xfs 00:12:57.790 ************************************ 00:12:58.050 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:58.050 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:58.050 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.988 10:45:37 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1600041 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1600041 ']' 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1600041 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1600041 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1600041' 00:12:58.988 killing process with pid 1600041 00:12:58.988 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1600041 00:12:58.989 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1600041 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:02.292 00:13:02.292 real 0m11.282s 
00:13:02.292 user 0m41.963s 00:13:02.292 sys 0m1.505s 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.292 ************************************ 00:13:02.292 END TEST nvmf_filesystem_in_capsule 00:13:02.292 ************************************ 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:02.292 rmmod nvme_rdma 00:13:02.292 rmmod nvme_fabrics 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:02.292 00:13:02.292 real 0m30.778s 00:13:02.292 user 1m24.982s 00:13:02.292 sys 0m9.492s 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.292 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:02.292 ************************************ 00:13:02.292 END TEST nvmf_filesystem 00:13:02.292 ************************************ 00:13:02.293 10:45:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:02.293 10:45:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:02.293 10:45:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.293 10:45:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:02.293 ************************************ 00:13:02.293 START TEST nvmf_target_discovery 00:13:02.293 ************************************ 00:13:02.293 10:45:40 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:02.293 * Looking for test storage... 
00:13:02.293 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:02.293 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:02.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.554 --rc genhtml_branch_coverage=1 00:13:02.554 --rc genhtml_function_coverage=1 00:13:02.554 --rc genhtml_legend=1 00:13:02.554 --rc geninfo_all_blocks=1 00:13:02.554 --rc geninfo_unexecuted_blocks=1 00:13:02.554 00:13:02.554 ' 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:02.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.554 --rc genhtml_branch_coverage=1 00:13:02.554 --rc genhtml_function_coverage=1 00:13:02.554 --rc genhtml_legend=1 00:13:02.554 --rc geninfo_all_blocks=1 00:13:02.554 --rc geninfo_unexecuted_blocks=1 00:13:02.554 00:13:02.554 ' 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:02.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.554 --rc genhtml_branch_coverage=1 00:13:02.554 --rc genhtml_function_coverage=1 00:13:02.554 --rc genhtml_legend=1 00:13:02.554 --rc geninfo_all_blocks=1 00:13:02.554 --rc geninfo_unexecuted_blocks=1 00:13:02.554 00:13:02.554 ' 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:02.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.554 --rc genhtml_branch_coverage=1 00:13:02.554 --rc genhtml_function_coverage=1 00:13:02.554 --rc genhtml_legend=1 00:13:02.554 --rc geninfo_all_blocks=1 00:13:02.554 --rc geninfo_unexecuted_blocks=1 00:13:02.554 00:13:02.554 ' 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:02.554 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.555 10:45:41 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:02.555 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:02.555 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.543 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.543 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:12.543 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:12.543 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:12.543 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:12.543 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:12.543 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:12.544 10:45:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:12.544 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:12.544 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:12.544 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.544 10:45:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:12.544 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:12.544 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
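At this point the trace has confirmed two physical mlx5 ports (0000:d9:00.0 and 0000:d9:00.1), and rdma_device_init has loaded the kernel RDMA stack before any addresses are assigned. A minimal sketch of that module-loading step, with the module names taken from the modprobe calls traced above (the loop and error handling are illustrative, not the exact code of nvmf/common.sh):

# Load the InfiniBand/RDMA kernel modules needed for NVMe/RDMA, in the order
# shown in the trace above. Assumes root privileges on a Linux host.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || { echo "failed to load $mod" >&2; exit 1; }
done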
00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:12.545 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:12.545 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:12.545 altname enp217s0f0np0 00:13:12.545 altname ens818f0np0 00:13:12.545 inet 192.168.100.8/24 scope global mlx_0_0 00:13:12.545 valid_lft forever preferred_lft forever 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:12.545 10:45:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:12.545 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:12.545 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:12.545 altname enp217s0f1np1 00:13:12.545 altname ens818f1np1 00:13:12.545 inet 192.168.100.9/24 scope global mlx_0_1 00:13:12.545 valid_lft forever preferred_lft forever 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:12.545 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
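The get_ip_address helper traced here (and again for mlx_0_1 just below) reduces the `ip -o -4 addr show` output to a bare IPv4 address; the same pipeline ran during allocate_nic_ips above and yielded 192.168.100.8 and 192.168.100.9 for the two ports. A standalone sketch reconstructed from the traced commands (it mirrors the pipeline in the log and is not quoted verbatim from nvmf/common.sh):

# Print the first IPv4 address assigned to an interface, e.g. 192.168.100.8.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0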
00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:12.546 192.168.100.9' 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:12.546 192.168.100.9' 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:12.546 192.168.100.9' 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:12.546 10:45:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1606287 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1606287 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1606287 ']' 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.546 10:45:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.546 [2024-11-28 10:45:50.017281] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:12.546 [2024-11-28 10:45:50.017400] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.546 [2024-11-28 10:45:50.176186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.546 [2024-11-28 10:45:50.279374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.546 [2024-11-28 10:45:50.279429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.546 [2024-11-28 10:45:50.279442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.546 [2024-11-28 10:45:50.279456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.546 [2024-11-28 10:45:50.279466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
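nvmfappstart launches the target binary with `-i 0 -e 0xFFFF -m 0xF` (shared-memory id 0, all tracepoint groups, a 4-core mask) and then blocks in waitforlisten until the JSON-RPC socket /var/tmp/spdk.sock answers. A minimal sketch of that start-and-wait pattern, assuming the paths shown in the trace (the polling loop and timeout are an illustration of what waitforlisten does, not its exact implementation):

# Start the SPDK NVMe-oF target and poll until its RPC socket accepts commands.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done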
00:13:12.546 [2024-11-28 10:45:50.281804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.546 [2024-11-28 10:45:50.281841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.546 [2024-11-28 10:45:50.281940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.546 [2024-11-28 10:45:50.281948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.546 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.546 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:12.546 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:12.546 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:12.546 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.546 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.546 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:12.546 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.546 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.546 [2024-11-28 10:45:50.917719] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f254b39a940) succeed. 00:13:12.546 [2024-11-28 10:45:50.927218] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f254b356940) succeed. 
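With the RDMA transport created (`nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192`) and both mlx5 IB devices registered, discovery.sh builds its test topology in the loop traced below: for each of cnode1..cnode4 it creates a null bdev (size 102400, block size 512, the script's NULL_BDEV_SIZE/NULL_BLOCK_SIZE), creates the subsystem, attaches the bdev as a namespace, and adds an RDMA listener on 192.168.100.8:4420; it then exposes the discovery subsystem on the same port and adds a referral to port 4430. A condensed sketch of the same RPC sequence via scripts/rpc.py (rpc_cmd in the trace issues these RPCs over /var/tmp/spdk.sock; the wrapper itself is not reproduced here):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in $(seq 1 4); do
    $rpc bdev_null_create Null$i 102400 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430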
00:13:12.546 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.546 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:12.546 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:12.546 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 Null1 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 [2024-11-28 10:45:51.232477] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 Null2 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:12.547 10:45:51 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 Null3 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 Null4 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:12.547 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.548 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.548 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.548 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:13:12.548 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.548 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.548 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.548 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:13:12.807 00:13:12.807 Discovery Log Number of Records 6, Generation counter 6 00:13:12.807 =====Discovery Log Entry 0====== 00:13:12.807 trtype: rdma 00:13:12.807 adrfam: ipv4 00:13:12.807 subtype: current discovery subsystem 00:13:12.807 treq: not required 00:13:12.807 portid: 0 00:13:12.807 trsvcid: 4420 00:13:12.807 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:12.807 traddr: 192.168.100.8 00:13:12.807 eflags: explicit discovery connections, duplicate discovery information 00:13:12.807 rdma_prtype: not specified 00:13:12.807 rdma_qptype: connected 00:13:12.807 rdma_cms: rdma-cm 00:13:12.807 rdma_pkey: 0x0000 00:13:12.807 =====Discovery Log Entry 1====== 00:13:12.807 trtype: rdma 00:13:12.807 adrfam: ipv4 00:13:12.807 subtype: nvme subsystem 00:13:12.807 treq: not required 00:13:12.807 portid: 0 00:13:12.807 trsvcid: 4420 00:13:12.807 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:12.807 traddr: 192.168.100.8 00:13:12.807 eflags: none 00:13:12.807 rdma_prtype: not specified 00:13:12.807 rdma_qptype: connected 00:13:12.807 rdma_cms: rdma-cm 00:13:12.807 rdma_pkey: 0x0000 00:13:12.807 =====Discovery Log Entry 2====== 00:13:12.807 trtype: rdma 00:13:12.807 adrfam: ipv4 00:13:12.807 subtype: nvme subsystem 00:13:12.807 treq: not required 00:13:12.807 portid: 0 00:13:12.807 trsvcid: 4420 00:13:12.807 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:12.807 traddr: 192.168.100.8 00:13:12.807 eflags: none 00:13:12.807 rdma_prtype: not specified 00:13:12.807 rdma_qptype: connected 00:13:12.807 rdma_cms: rdma-cm 00:13:12.807 rdma_pkey: 0x0000 00:13:12.807 =====Discovery Log Entry 3====== 00:13:12.807 trtype: rdma 00:13:12.807 adrfam: ipv4 00:13:12.807 subtype: nvme subsystem 00:13:12.807 treq: not required 00:13:12.807 portid: 0 00:13:12.807 trsvcid: 4420 00:13:12.807 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:12.807 traddr: 192.168.100.8 00:13:12.807 eflags: none 00:13:12.807 rdma_prtype: not specified 00:13:12.807 rdma_qptype: connected 00:13:12.807 rdma_cms: rdma-cm 00:13:12.807 rdma_pkey: 0x0000 00:13:12.807 =====Discovery Log Entry 4====== 00:13:12.807 trtype: rdma 00:13:12.807 adrfam: ipv4 00:13:12.807 subtype: nvme subsystem 00:13:12.807 treq: not required 00:13:12.807 portid: 0 00:13:12.807 trsvcid: 4420 00:13:12.807 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:12.807 traddr: 192.168.100.8 00:13:12.807 eflags: none 00:13:12.807 rdma_prtype: not specified 00:13:12.807 rdma_qptype: connected 00:13:12.807 rdma_cms: rdma-cm 00:13:12.807 rdma_pkey: 0x0000 00:13:12.807 =====Discovery Log Entry 5====== 00:13:12.807 trtype: rdma 00:13:12.807 adrfam: ipv4 00:13:12.807 subtype: discovery subsystem referral 00:13:12.807 treq: not required 00:13:12.807 portid: 0 00:13:12.807 trsvcid: 4430 00:13:12.807 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:12.807 traddr: 192.168.100.8 00:13:12.807 eflags: none 00:13:12.807 rdma_prtype: unrecognized 00:13:12.807 rdma_qptype: unrecognized 00:13:12.807 rdma_cms: unrecognized 00:13:12.807 rdma_pkey: 0x0000 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:12.807 Perform nvmf subsystem discovery via RPC 00:13:12.807 10:45:51 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.807 [ 00:13:12.807 { 00:13:12.807 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:12.807 "subtype": "Discovery", 00:13:12.807 "listen_addresses": [ 00:13:12.807 { 00:13:12.807 "trtype": "RDMA", 00:13:12.807 "adrfam": "IPv4", 00:13:12.807 "traddr": "192.168.100.8", 00:13:12.807 "trsvcid": "4420" 00:13:12.807 } 00:13:12.807 ], 00:13:12.807 "allow_any_host": true, 00:13:12.807 "hosts": [] 00:13:12.807 }, 00:13:12.807 { 00:13:12.807 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:12.807 "subtype": "NVMe", 00:13:12.807 "listen_addresses": [ 00:13:12.807 { 00:13:12.807 "trtype": "RDMA", 00:13:12.807 "adrfam": "IPv4", 00:13:12.807 "traddr": "192.168.100.8", 00:13:12.807 "trsvcid": "4420" 00:13:12.807 } 00:13:12.807 ], 00:13:12.807 "allow_any_host": true, 00:13:12.807 "hosts": [], 00:13:12.807 "serial_number": "SPDK00000000000001", 00:13:12.807 "model_number": "SPDK bdev Controller", 00:13:12.807 "max_namespaces": 32, 00:13:12.807 "min_cntlid": 1, 00:13:12.807 "max_cntlid": 65519, 00:13:12.807 "namespaces": [ 00:13:12.807 { 00:13:12.807 "nsid": 1, 00:13:12.807 "bdev_name": "Null1", 00:13:12.807 "name": "Null1", 00:13:12.807 "nguid": "9BF55A19A6784B05AF176257AC1A3C79", 00:13:12.807 "uuid": "9bf55a19-a678-4b05-af17-6257ac1a3c79" 00:13:12.807 } 00:13:12.807 ] 00:13:12.807 }, 00:13:12.807 { 00:13:12.807 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:12.807 "subtype": "NVMe", 00:13:12.807 "listen_addresses": [ 00:13:12.807 { 00:13:12.807 "trtype": "RDMA", 00:13:12.807 "adrfam": "IPv4", 00:13:12.807 "traddr": "192.168.100.8", 00:13:12.807 "trsvcid": "4420" 00:13:12.807 } 00:13:12.807 ], 00:13:12.807 "allow_any_host": true, 00:13:12.807 "hosts": [], 00:13:12.807 "serial_number": "SPDK00000000000002", 00:13:12.807 "model_number": "SPDK bdev Controller", 00:13:12.807 "max_namespaces": 32, 00:13:12.807 "min_cntlid": 1, 00:13:12.807 "max_cntlid": 65519, 00:13:12.807 "namespaces": [ 00:13:12.807 { 00:13:12.807 "nsid": 1, 00:13:12.807 "bdev_name": "Null2", 00:13:12.807 "name": "Null2", 00:13:12.807 "nguid": "591525C1FAD84929A77B8A789C1F1B47", 00:13:12.807 "uuid": "591525c1-fad8-4929-a77b-8a789c1f1b47" 00:13:12.807 } 00:13:12.807 ] 00:13:12.807 }, 00:13:12.807 { 00:13:12.807 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:12.807 "subtype": "NVMe", 00:13:12.807 "listen_addresses": [ 00:13:12.807 { 00:13:12.807 "trtype": "RDMA", 00:13:12.807 "adrfam": "IPv4", 00:13:12.807 "traddr": "192.168.100.8", 00:13:12.807 "trsvcid": "4420" 00:13:12.807 } 00:13:12.807 ], 00:13:12.807 "allow_any_host": true, 00:13:12.807 "hosts": [], 00:13:12.807 "serial_number": "SPDK00000000000003", 00:13:12.807 "model_number": "SPDK bdev Controller", 00:13:12.807 "max_namespaces": 32, 00:13:12.807 "min_cntlid": 1, 00:13:12.807 "max_cntlid": 65519, 00:13:12.807 "namespaces": [ 00:13:12.807 { 00:13:12.807 "nsid": 1, 00:13:12.807 "bdev_name": "Null3", 00:13:12.807 "name": "Null3", 00:13:12.807 "nguid": "5AE73E37A71346DEB5D2005F7882950D", 00:13:12.807 "uuid": "5ae73e37-a713-46de-b5d2-005f7882950d" 00:13:12.807 } 00:13:12.807 ] 00:13:12.807 }, 00:13:12.807 { 00:13:12.807 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:12.807 "subtype": "NVMe", 00:13:12.807 "listen_addresses": [ 00:13:12.807 { 00:13:12.807 
"trtype": "RDMA", 00:13:12.807 "adrfam": "IPv4", 00:13:12.807 "traddr": "192.168.100.8", 00:13:12.807 "trsvcid": "4420" 00:13:12.807 } 00:13:12.807 ], 00:13:12.807 "allow_any_host": true, 00:13:12.807 "hosts": [], 00:13:12.807 "serial_number": "SPDK00000000000004", 00:13:12.807 "model_number": "SPDK bdev Controller", 00:13:12.807 "max_namespaces": 32, 00:13:12.807 "min_cntlid": 1, 00:13:12.807 "max_cntlid": 65519, 00:13:12.807 "namespaces": [ 00:13:12.807 { 00:13:12.807 "nsid": 1, 00:13:12.807 "bdev_name": "Null4", 00:13:12.807 "name": "Null4", 00:13:12.807 "nguid": "06BA6E4229494059B051FFBFF7A2B5AB", 00:13:12.807 "uuid": "06ba6e42-2949-4059-b051-ffbff7a2b5ab" 00:13:12.807 } 00:13:12.807 ] 00:13:12.807 } 00:13:12.807 ] 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:12.807 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:12.807 
10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:12.808 10:45:51 
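The teardown above mirrors the setup: each subsystem nqn.2016-06.io.spdk:cnodeN is deleted before its backing NullN bdev, the 4430 referral is removed, and bdev_get_bdevs is queried to confirm that no bdevs remain. A minimal standalone sketch of that sequence via rpc.py (the SPDK checkout path is the one used by this job; the commands and arguments are taken from the trace of discovery.sh lines 42-50):

# Sketch only: cleanup phase equivalent to the trace above.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 1 4); do
    "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # subsystem first
    "$RPC" bdev_null_delete "Null${i}"                             # then its null bdev
done
"$RPC" nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
[ -z "$("$RPC" bdev_get_bdevs | jq -r '.[].name')" ] && echo 'no bdevs left'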
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:12.808 rmmod nvme_rdma 00:13:12.808 rmmod nvme_fabrics 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1606287 ']' 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1606287 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1606287 ']' 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1606287 00:13:12.808 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:13.066 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.066 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1606287 00:13:13.066 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.066 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.066 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1606287' 00:13:13.066 killing process with pid 1606287 00:13:13.066 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1606287 00:13:13.066 10:45:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1606287 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:14.971 00:13:14.971 real 0m12.409s 00:13:14.971 user 0m13.359s 00:13:14.971 sys 0m7.336s 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:14.971 ************************************ 00:13:14.971 END TEST 
nvmf_target_discovery 00:13:14.971 ************************************ 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.971 ************************************ 00:13:14.971 START TEST nvmf_referrals 00:13:14.971 ************************************ 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:14.971 * Looking for test storage... 00:13:14.971 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.971 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:14.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.972 --rc genhtml_branch_coverage=1 00:13:14.972 --rc genhtml_function_coverage=1 00:13:14.972 --rc genhtml_legend=1 00:13:14.972 --rc geninfo_all_blocks=1 00:13:14.972 --rc geninfo_unexecuted_blocks=1 00:13:14.972 00:13:14.972 ' 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:14.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.972 --rc genhtml_branch_coverage=1 00:13:14.972 --rc genhtml_function_coverage=1 00:13:14.972 --rc genhtml_legend=1 00:13:14.972 --rc geninfo_all_blocks=1 00:13:14.972 --rc geninfo_unexecuted_blocks=1 00:13:14.972 00:13:14.972 ' 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:14.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.972 --rc genhtml_branch_coverage=1 00:13:14.972 --rc genhtml_function_coverage=1 00:13:14.972 --rc genhtml_legend=1 00:13:14.972 --rc geninfo_all_blocks=1 00:13:14.972 --rc geninfo_unexecuted_blocks=1 00:13:14.972 00:13:14.972 ' 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:14.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.972 --rc genhtml_branch_coverage=1 00:13:14.972 --rc genhtml_function_coverage=1 00:13:14.972 --rc genhtml_legend=1 00:13:14.972 --rc geninfo_all_blocks=1 00:13:14.972 --rc geninfo_unexecuted_blocks=1 00:13:14.972 00:13:14.972 ' 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.972 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.973 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.973 10:45:53 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:23.102 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:23.102 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:23.102 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:23.103 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:23.103 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:23.103 10:46:01 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:23.103 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:23.103 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:23.103 altname enp217s0f0np0 00:13:23.103 altname ens818f0np0 00:13:23.103 inet 192.168.100.8/24 scope global mlx_0_0 00:13:23.103 valid_lft forever preferred_lft forever 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:23.103 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:23.103 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:23.103 altname enp217s0f1np1 00:13:23.103 altname ens818f1np1 00:13:23.103 inet 192.168.100.9/24 scope global mlx_0_1 00:13:23.103 valid_lft forever preferred_lft forever 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:23.103 10:46:01 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:23.103 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:23.104 192.168.100.9' 
00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:23.104 192.168.100.9' 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:23.104 192.168.100.9' 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1610945 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1610945 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1610945 ']' 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.104 10:46:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:23.363 [2024-11-28 10:46:02.046031] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
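Target IP selection during nvmftestinit is purely positional: the first address in RDMA_IP_LIST becomes NVMF_FIRST_TARGET_IP and the second becomes NVMF_SECOND_TARGET_IP. A condensed sketch of that derivation, assuming the mlx_0_0/mlx_0_1 interfaces detected above (ip_of is a small helper added here for brevity, not a function from common.sh):

# Sketch only: how the two target IPs end up as 192.168.100.8 / 192.168.100.9.
ip_of() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
RDMA_IP_LIST="$(ip_of mlx_0_0)
$(ip_of mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9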
00:13:23.363 [2024-11-28 10:46:02.046127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.363 [2024-11-28 10:46:02.198682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.623 [2024-11-28 10:46:02.296610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.623 [2024-11-28 10:46:02.296660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.623 [2024-11-28 10:46:02.296672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.623 [2024-11-28 10:46:02.296685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.623 [2024-11-28 10:46:02.296694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.623 [2024-11-28 10:46:02.299208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.623 [2024-11-28 10:46:02.299324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.623 [2024-11-28 10:46:02.299385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.623 [2024-11-28 10:46:02.299394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.192 10:46:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.192 10:46:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:24.192 10:46:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:24.192 10:46:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:24.192 10:46:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.192 10:46:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.192 10:46:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:24.192 10:46:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.192 10:46:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.192 [2024-11-28 10:46:02.943986] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f4bd8d31940) succeed. 00:13:24.192 [2024-11-28 10:46:02.954131] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f4bd8bbd940) succeed. 
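With reactors running on all four cores and both mlx5 IB devices registered, the trace below adds a discovery listener on port 8009 and three referral entries, then checks them both through the RPC and through nvme discover. A hedged sketch of the equivalent rpc.py sequence, using the same transport options, addresses and ports as the trace:

# Sketch only: referral setup exercised by referrals.sh in the trace that follows.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$RPC" nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$RPC" nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
done
"$RPC" nvmf_discovery_get_referrals | jq length   # referrals.sh expects 3 here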
00:13:24.451 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.451 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:13:24.451 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.451 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.451 [2024-11-28 10:46:03.216533] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:24.452 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:24.712 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:24.972 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:24.972 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:24.973 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:25.232 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:25.232 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:25.233 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:25.233 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:25.233 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:25.233 10:46:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # 
[[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:25.233 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:25.492 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:25.493 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:25.753 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:26.013 rmmod nvme_rdma 00:13:26.013 rmmod nvme_fabrics 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1610945 ']' 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1610945 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1610945 ']' 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1610945 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1610945 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1610945' 00:13:26.013 killing process with pid 1610945 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1610945 00:13:26.013 10:46:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1610945 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:27.922 00:13:27.922 real 0m12.930s 00:13:27.922 user 0m17.882s 00:13:27.922 sys 0m7.396s 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.922 10:46:06 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:27.922 ************************************ 00:13:27.922 END TEST nvmf_referrals 00:13:27.922 ************************************ 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:27.922 ************************************ 00:13:27.922 START TEST nvmf_connect_disconnect 00:13:27.922 ************************************ 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:13:27.922 * Looking for test storage... 00:13:27.922 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.922 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:27.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.923 --rc genhtml_branch_coverage=1 00:13:27.923 --rc genhtml_function_coverage=1 00:13:27.923 --rc genhtml_legend=1 00:13:27.923 --rc geninfo_all_blocks=1 00:13:27.923 --rc geninfo_unexecuted_blocks=1 00:13:27.923 00:13:27.923 ' 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:27.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.923 --rc genhtml_branch_coverage=1 00:13:27.923 --rc genhtml_function_coverage=1 00:13:27.923 --rc genhtml_legend=1 00:13:27.923 --rc geninfo_all_blocks=1 00:13:27.923 --rc geninfo_unexecuted_blocks=1 00:13:27.923 00:13:27.923 ' 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:27.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.923 --rc genhtml_branch_coverage=1 00:13:27.923 --rc genhtml_function_coverage=1 00:13:27.923 --rc genhtml_legend=1 00:13:27.923 --rc geninfo_all_blocks=1 00:13:27.923 --rc geninfo_unexecuted_blocks=1 00:13:27.923 00:13:27.923 ' 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:27.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.923 --rc genhtml_branch_coverage=1 00:13:27.923 --rc genhtml_function_coverage=1 00:13:27.923 --rc genhtml_legend=1 00:13:27.923 --rc geninfo_all_blocks=1 00:13:27.923 --rc geninfo_unexecuted_blocks=1 00:13:27.923 00:13:27.923 ' 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.923 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.923 10:46:06 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.924 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:27.924 10:46:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.052 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 
00:13:36.053 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:36.053 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:36.053 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:36.053 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:36.053 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:36.054 10:46:14 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:36.054 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:36.054 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:36.054 altname enp217s0f0np0 00:13:36.054 altname ens818f0np0 00:13:36.054 inet 192.168.100.8/24 scope global mlx_0_0 00:13:36.054 valid_lft forever preferred_lft forever 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:36.054 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:36.054 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:36.054 altname enp217s0f1np1 00:13:36.054 altname ens818f1np1 00:13:36.054 inet 192.168.100.9/24 scope global mlx_0_1 00:13:36.054 valid_lft forever preferred_lft forever 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:36.054 10:46:14 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:36.054 192.168.100.9' 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:36.054 192.168.100.9' 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:36.054 192.168.100.9' 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:13:36.054 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:36.055 10:46:14 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1615824 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1615824 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1615824 ']' 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.055 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.314 [2024-11-28 10:46:14.971236] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:36.314 [2024-11-28 10:46:14.971330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.314 [2024-11-28 10:46:15.128409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.574 [2024-11-28 10:46:15.232097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.574 [2024-11-28 10:46:15.232146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.574 [2024-11-28 10:46:15.232161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.574 [2024-11-28 10:46:15.232174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.574 [2024-11-28 10:46:15.232184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
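
nvmfappstart above launches the target binary with -i 0 -e 0xFFFF -m 0xF (shared-memory id, tracepoint mask, 4-core mask) and waitforlisten then blocks until the application answers on /var/tmp/spdk.sock. A minimal stand-alone sketch of that start-and-wait pattern follows; the binary and rpc.py paths are the ones from this workspace, while the polling loop is only an illustration of what waitforlisten does, not the harness's exact code.

# Start nvmf_tgt with the flags seen in the trace, then poll its RPC socket.
app=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

"$app" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

for _ in $(seq 1 100); do
    # rpc_get_methods only succeeds once the app is listening on the socket.
    if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    # Bail out early if the target already exited.
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited prematurely" >&2; exit 1; }
    sleep 0.5
done
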
00:13:36.574 [2024-11-28 10:46:15.234666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.574 [2024-11-28 10:46:15.234740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.574 [2024-11-28 10:46:15.234798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.574 [2024-11-28 10:46:15.234807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.141 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.141 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:37.141 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:37.141 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:37.141 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.141 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.141 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:13:37.141 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.141 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.141 [2024-11-28 10:46:15.832181] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:13:37.141 [2024-11-28 10:46:15.872245] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fae847b3940) succeed. 00:13:37.141 [2024-11-28 10:46:15.881690] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fae8476d940) succeed. 
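
Both mlx5 IB devices register successfully, and the connect_disconnect test then assembles its fabric over RPC (malloc bdev, subsystem, namespace, RDMA listener) before looping nvme connect / nvme disconnect, which produces the per-iteration "disconnected 1 controller(s)" lines that follow. A condensed sketch of that sequence, assuming SPDK's scripts/rpc.py and the kernel nvme-cli are available; the NQN, serial, address, port and iteration count are the ones from this run.

# Fabric setup plus the connect/disconnect loop, condensed from the trace.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
addr=192.168.100.8

"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
"$rpc" bdev_malloc_create 64 512                 # 64 MB bdev, 512 B blocks -> Malloc0
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
"$rpc" nvmf_subsystem_add_listener "$nqn" -t rdma -a "$addr" -s 4420

for i in $(seq 1 100); do                        # num_iterations=100 in connect_disconnect.sh
    nvme connect -t rdma -n "$nqn" -a "$addr" -s 4420 -i 8
    # (the real test also waits for the namespace to appear before tearing it down)
    nvme disconnect -n "$nqn"                    # prints "NQN:... disconnected 1 controller(s)"
done

The -i 8 passed to nvme connect caps each controller at eight I/O queues, which presumably keeps the 100 connect/disconnect cycles cheap on this 4-core target.
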
00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.399 [2024-11-28 10:46:16.124725] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:37.399 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:40.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.546 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:13:59.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.578 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:39.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:49.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:18:53.024 10:51:31 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:53.024 rmmod nvme_rdma 00:18:53.024 rmmod nvme_fabrics 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1615824 ']' 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1615824 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1615824 ']' 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1615824 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1615824 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1615824' 00:18:53.024 killing process with pid 1615824 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1615824 00:18:53.024 10:51:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1615824 00:18:54.408 10:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:54.409 10:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:54.409 00:18:54.409 real 5m26.382s 00:18:54.409 user 21m5.977s 00:18:54.409 sys 0m19.698s 00:18:54.409 10:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.409 10:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:54.409 
************************************ 00:18:54.409 END TEST nvmf_connect_disconnect 00:18:54.409 ************************************ 00:18:54.409 10:51:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:18:54.409 10:51:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.409 10:51:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.409 10:51:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:54.409 ************************************ 00:18:54.409 START TEST nvmf_multitarget 00:18:54.409 ************************************ 00:18:54.409 10:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:18:54.409 * Looking for test storage... 00:18:54.409 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:54.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.409 --rc genhtml_branch_coverage=1 00:18:54.409 --rc genhtml_function_coverage=1 00:18:54.409 --rc genhtml_legend=1 00:18:54.409 --rc geninfo_all_blocks=1 00:18:54.409 --rc geninfo_unexecuted_blocks=1 00:18:54.409 00:18:54.409 ' 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:54.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.409 --rc genhtml_branch_coverage=1 00:18:54.409 --rc genhtml_function_coverage=1 00:18:54.409 --rc genhtml_legend=1 00:18:54.409 --rc geninfo_all_blocks=1 00:18:54.409 --rc geninfo_unexecuted_blocks=1 00:18:54.409 00:18:54.409 ' 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:54.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.409 --rc genhtml_branch_coverage=1 00:18:54.409 --rc genhtml_function_coverage=1 00:18:54.409 --rc genhtml_legend=1 00:18:54.409 --rc geninfo_all_blocks=1 00:18:54.409 --rc geninfo_unexecuted_blocks=1 00:18:54.409 00:18:54.409 ' 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:54.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.409 --rc genhtml_branch_coverage=1 00:18:54.409 --rc genhtml_function_coverage=1 00:18:54.409 --rc genhtml_legend=1 00:18:54.409 --rc geninfo_all_blocks=1 00:18:54.409 --rc geninfo_unexecuted_blocks=1 00:18:54.409 00:18:54.409 ' 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.409 10:51:33 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.409 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:54.410 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:54.410 10:51:33 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:18:54.410 10:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:02.554 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:02.554 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:02.554 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:02.554 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:02.554 10:51:41 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:02.554 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:02.555 10:51:41 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:02.555 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:02.816 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:02.816 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:02.816 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:02.816 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:02.816 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:02.816 altname enp217s0f0np0 00:19:02.816 altname ens818f0np0 00:19:02.816 inet 192.168.100.8/24 scope global mlx_0_0 00:19:02.816 valid_lft forever preferred_lft forever 00:19:02.816 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:02.816 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:02.816 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:02.817 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:02.817 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:02.817 altname enp217s0f1np1 00:19:02.817 altname ens818f1np1 00:19:02.817 inet 192.168.100.9/24 scope global mlx_0_1 00:19:02.817 valid_lft forever preferred_lft forever 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:02.817 10:51:41 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:02.817 192.168.100.9' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:02.817 192.168.100.9' 00:19:02.817 10:51:41 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:02.817 192.168.100.9' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1675921 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1675921 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1675921 ']' 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.817 10:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:02.817 [2024-11-28 10:51:41.673433] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
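
As with the first test, the multitarget run re-derives the RDMA interface addresses before starting a fresh target: get_ip_address simply parses ip -o -4 addr show, and the two results are folded into NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP. A minimal sketch of that discovery step, with the interface names from this host (they will differ elsewhere), not the common.sh helper verbatim:

# Address discovery as traced above: one IPv4 address per RDMA-capable netdev.
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address; field 4 is ADDR/PREFIXLEN.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ips=()
for nic in mlx_0_0 mlx_0_1; do                   # names from this run's two mlx5 ports
    rdma_ips+=("$(get_ip_address "$nic")")
done

NVMF_FIRST_TARGET_IP=${rdma_ips[0]}              # 192.168.100.8 here
NVMF_SECOND_TARGET_IP=${rdma_ips[1]}             # 192.168.100.9 here
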
00:19:02.817 [2024-11-28 10:51:41.673533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.078 [2024-11-28 10:51:41.829161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:03.078 [2024-11-28 10:51:41.927602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.078 [2024-11-28 10:51:41.927656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.078 [2024-11-28 10:51:41.927668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.078 [2024-11-28 10:51:41.927681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.078 [2024-11-28 10:51:41.927690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.078 [2024-11-28 10:51:41.930042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.078 [2024-11-28 10:51:41.930116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.078 [2024-11-28 10:51:41.930175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.078 [2024-11-28 10:51:41.930184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:03.649 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.649 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:19:03.649 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:03.649 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:03.649 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:03.649 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.649 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:03.649 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:03.649 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:19:03.909 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:19:03.909 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:19:03.909 "nvmf_tgt_1" 00:19:03.909 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:19:04.169 "nvmf_tgt_2" 00:19:04.169 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:04.169 
10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:19:04.169 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:19:04.169 10:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:19:04.169 true 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:19:04.428 true 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:04.428 rmmod nvme_rdma 00:19:04.428 rmmod nvme_fabrics 00:19:04.428 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1675921 ']' 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1675921 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1675921 ']' 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1675921 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1675921 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1675921' 00:19:04.687 killing process with pid 1675921 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1675921 00:19:04.687 10:51:43 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1675921 00:19:05.625 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:05.625 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:05.625 00:19:05.625 real 0m11.537s 00:19:05.625 user 0m12.859s 00:19:05.625 sys 0m7.131s 00:19:05.625 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.625 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:05.625 ************************************ 00:19:05.625 END TEST nvmf_multitarget 00:19:05.625 ************************************ 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.885 ************************************ 00:19:05.885 START TEST nvmf_rpc 00:19:05.885 ************************************ 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:19:05.885 * Looking for test storage... 
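The nvmf_multitarget run that just completed exercises adding and removing extra targets over the SPDK RPC socket. Paraphrased from the trace above, the flow reduces to roughly the following; the script path, target names and sizes are copied from the log, while the count checks are simplified into plain test expressions.

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  # One default target exists after startup.
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]

  # Add two extra targets, each with a 32-entry subsystem array.
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]

  # Remove them again and confirm only the default target remains.
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]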
00:19:05.885 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.885 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.886 --rc genhtml_branch_coverage=1 00:19:05.886 --rc genhtml_function_coverage=1 00:19:05.886 --rc genhtml_legend=1 00:19:05.886 --rc geninfo_all_blocks=1 00:19:05.886 --rc geninfo_unexecuted_blocks=1 00:19:05.886 00:19:05.886 ' 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.886 --rc genhtml_branch_coverage=1 00:19:05.886 --rc genhtml_function_coverage=1 00:19:05.886 --rc genhtml_legend=1 00:19:05.886 --rc geninfo_all_blocks=1 00:19:05.886 --rc geninfo_unexecuted_blocks=1 00:19:05.886 00:19:05.886 ' 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.886 --rc genhtml_branch_coverage=1 00:19:05.886 --rc genhtml_function_coverage=1 00:19:05.886 --rc genhtml_legend=1 00:19:05.886 --rc geninfo_all_blocks=1 00:19:05.886 --rc geninfo_unexecuted_blocks=1 00:19:05.886 00:19:05.886 ' 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.886 --rc genhtml_branch_coverage=1 00:19:05.886 --rc genhtml_function_coverage=1 00:19:05.886 --rc genhtml_legend=1 00:19:05.886 --rc geninfo_all_blocks=1 00:19:05.886 --rc geninfo_unexecuted_blocks=1 00:19:05.886 00:19:05.886 ' 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.886 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:06.146 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:06.146 10:51:44 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:19:06.146 10:51:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.142 10:51:53 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:16.142 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:16.143 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:16.143 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:16.143 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:16.143 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:16.143 10:51:53 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:16.143 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:16.143 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.143 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:16.143 altname enp217s0f0np0 00:19:16.143 altname ens818f0np0 00:19:16.144 inet 192.168.100.8/24 scope global mlx_0_0 00:19:16.144 valid_lft forever preferred_lft forever 00:19:16.144 
10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:16.144 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.144 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:16.144 altname enp217s0f1np1 00:19:16.144 altname ens818f1np1 00:19:16.144 inet 192.168.100.9/24 scope global mlx_0_1 00:19:16.144 valid_lft forever preferred_lft forever 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
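As in the earlier multitarget run, the rpc test collects one IPv4 address per RDMA interface into RDMA_IP_LIST and then splits that list into a first and second target address with head/tail. A minimal sketch of the split, with variable names taken from the trace:

  # RDMA_IP_LIST holds one address per line, e.g.
  #   192.168.100.8
  #   192.168.100.9
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)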
00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:16.144 192.168.100.9' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:16.144 192.168.100.9' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:16.144 192.168.100.9' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1680639 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1680639 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1680639 ']' 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.144 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.145 10:51:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.145 [2024-11-28 10:51:53.535683] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:16.145 [2024-11-28 10:51:53.535799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.145 [2024-11-28 10:51:53.688874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.145 [2024-11-28 10:51:53.788023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.145 [2024-11-28 10:51:53.788074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.145 [2024-11-28 10:51:53.788087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.145 [2024-11-28 10:51:53.788100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.145 [2024-11-28 10:51:53.788110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
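Here the target application is launched in the background and the harness blocks on waitforlisten until the RPC socket is usable. The snippet below is a simplified stand-in for that startup sequence, assuming a plain poll on the UNIX socket path; the real waitforlisten helper in autotest_common.sh does more bookkeeping (retry limits, log capture) than this sketch.

  BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt

  # Start the target on cores 0-3 with all tracepoint groups enabled,
  # matching the flags recorded in the trace.
  $BIN -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Simplified wait: poll until the RPC socket appears, bail if the target dies.
  until [ -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" || exit 1
      sleep 0.5
  done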
00:19:16.145 [2024-11-28 10:51:53.790657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.145 [2024-11-28 10:51:53.790737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.145 [2024-11-28 10:51:53.790800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.145 [2024-11-28 10:51:53.790808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:19:16.145 "tick_rate": 2500000000, 00:19:16.145 "poll_groups": [ 00:19:16.145 { 00:19:16.145 "name": "nvmf_tgt_poll_group_000", 00:19:16.145 "admin_qpairs": 0, 00:19:16.145 "io_qpairs": 0, 00:19:16.145 "current_admin_qpairs": 0, 00:19:16.145 "current_io_qpairs": 0, 00:19:16.145 "pending_bdev_io": 0, 00:19:16.145 "completed_nvme_io": 0, 00:19:16.145 "transports": [] 00:19:16.145 }, 00:19:16.145 { 00:19:16.145 "name": "nvmf_tgt_poll_group_001", 00:19:16.145 "admin_qpairs": 0, 00:19:16.145 "io_qpairs": 0, 00:19:16.145 "current_admin_qpairs": 0, 00:19:16.145 "current_io_qpairs": 0, 00:19:16.145 "pending_bdev_io": 0, 00:19:16.145 "completed_nvme_io": 0, 00:19:16.145 "transports": [] 00:19:16.145 }, 00:19:16.145 { 00:19:16.145 "name": "nvmf_tgt_poll_group_002", 00:19:16.145 "admin_qpairs": 0, 00:19:16.145 "io_qpairs": 0, 00:19:16.145 "current_admin_qpairs": 0, 00:19:16.145 "current_io_qpairs": 0, 00:19:16.145 "pending_bdev_io": 0, 00:19:16.145 "completed_nvme_io": 0, 00:19:16.145 "transports": [] 00:19:16.145 }, 00:19:16.145 { 00:19:16.145 "name": "nvmf_tgt_poll_group_003", 00:19:16.145 "admin_qpairs": 0, 00:19:16.145 "io_qpairs": 0, 00:19:16.145 "current_admin_qpairs": 0, 00:19:16.145 "current_io_qpairs": 0, 00:19:16.145 "pending_bdev_io": 0, 00:19:16.145 "completed_nvme_io": 0, 00:19:16.145 "transports": [] 00:19:16.145 } 00:19:16.145 ] 00:19:16.145 }' 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.145 [2024-11-28 10:51:54.532662] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f3c617bd940) succeed. 00:19:16.145 [2024-11-28 10:51:54.542503] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f3c61779940) succeed. 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.145 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:19:16.145 "tick_rate": 2500000000, 00:19:16.145 "poll_groups": [ 00:19:16.145 { 00:19:16.145 "name": "nvmf_tgt_poll_group_000", 00:19:16.145 "admin_qpairs": 0, 00:19:16.145 "io_qpairs": 0, 00:19:16.145 "current_admin_qpairs": 0, 00:19:16.145 "current_io_qpairs": 0, 00:19:16.145 "pending_bdev_io": 0, 00:19:16.145 "completed_nvme_io": 0, 00:19:16.145 "transports": [ 00:19:16.145 { 00:19:16.145 "trtype": "RDMA", 00:19:16.145 "pending_data_buffer": 0, 00:19:16.145 "devices": [ 00:19:16.145 { 00:19:16.145 "name": "mlx5_0", 00:19:16.145 "polls": 30944, 00:19:16.145 "idle_polls": 30944, 00:19:16.145 "completions": 0, 00:19:16.145 "requests": 0, 00:19:16.145 "request_latency": 0, 00:19:16.145 "pending_free_request": 0, 00:19:16.145 "pending_rdma_read": 0, 00:19:16.145 "pending_rdma_write": 0, 00:19:16.145 "pending_rdma_send": 0, 00:19:16.145 "total_send_wrs": 0, 00:19:16.145 "send_doorbell_updates": 0, 00:19:16.145 "total_recv_wrs": 4096, 00:19:16.145 "recv_doorbell_updates": 1 00:19:16.145 }, 00:19:16.145 { 00:19:16.145 "name": "mlx5_1", 00:19:16.145 "polls": 30944, 00:19:16.145 "idle_polls": 30944, 00:19:16.145 "completions": 0, 00:19:16.145 "requests": 0, 00:19:16.145 "request_latency": 0, 00:19:16.145 "pending_free_request": 0, 00:19:16.145 "pending_rdma_read": 0, 00:19:16.145 "pending_rdma_write": 0, 00:19:16.145 "pending_rdma_send": 0, 00:19:16.145 "total_send_wrs": 0, 00:19:16.146 "send_doorbell_updates": 0, 00:19:16.146 "total_recv_wrs": 4096, 00:19:16.146 "recv_doorbell_updates": 1 00:19:16.146 } 00:19:16.146 ] 00:19:16.146 } 00:19:16.146 ] 00:19:16.146 }, 00:19:16.146 { 00:19:16.146 "name": "nvmf_tgt_poll_group_001", 00:19:16.146 "admin_qpairs": 0, 00:19:16.146 "io_qpairs": 0, 00:19:16.146 "current_admin_qpairs": 0, 00:19:16.146 "current_io_qpairs": 0, 00:19:16.146 "pending_bdev_io": 0, 00:19:16.146 "completed_nvme_io": 0, 00:19:16.146 "transports": [ 00:19:16.146 { 00:19:16.146 "trtype": "RDMA", 00:19:16.146 "pending_data_buffer": 0, 00:19:16.146 "devices": [ 00:19:16.146 { 
00:19:16.146 "name": "mlx5_0", 00:19:16.146 "polls": 19631, 00:19:16.146 "idle_polls": 19631, 00:19:16.146 "completions": 0, 00:19:16.146 "requests": 0, 00:19:16.146 "request_latency": 0, 00:19:16.146 "pending_free_request": 0, 00:19:16.146 "pending_rdma_read": 0, 00:19:16.146 "pending_rdma_write": 0, 00:19:16.146 "pending_rdma_send": 0, 00:19:16.146 "total_send_wrs": 0, 00:19:16.146 "send_doorbell_updates": 0, 00:19:16.146 "total_recv_wrs": 4096, 00:19:16.146 "recv_doorbell_updates": 1 00:19:16.146 }, 00:19:16.146 { 00:19:16.146 "name": "mlx5_1", 00:19:16.146 "polls": 19631, 00:19:16.146 "idle_polls": 19631, 00:19:16.146 "completions": 0, 00:19:16.146 "requests": 0, 00:19:16.146 "request_latency": 0, 00:19:16.146 "pending_free_request": 0, 00:19:16.146 "pending_rdma_read": 0, 00:19:16.146 "pending_rdma_write": 0, 00:19:16.146 "pending_rdma_send": 0, 00:19:16.146 "total_send_wrs": 0, 00:19:16.146 "send_doorbell_updates": 0, 00:19:16.146 "total_recv_wrs": 4096, 00:19:16.146 "recv_doorbell_updates": 1 00:19:16.146 } 00:19:16.146 ] 00:19:16.146 } 00:19:16.146 ] 00:19:16.146 }, 00:19:16.146 { 00:19:16.146 "name": "nvmf_tgt_poll_group_002", 00:19:16.146 "admin_qpairs": 0, 00:19:16.146 "io_qpairs": 0, 00:19:16.146 "current_admin_qpairs": 0, 00:19:16.146 "current_io_qpairs": 0, 00:19:16.146 "pending_bdev_io": 0, 00:19:16.146 "completed_nvme_io": 0, 00:19:16.146 "transports": [ 00:19:16.146 { 00:19:16.146 "trtype": "RDMA", 00:19:16.146 "pending_data_buffer": 0, 00:19:16.146 "devices": [ 00:19:16.146 { 00:19:16.146 "name": "mlx5_0", 00:19:16.146 "polls": 10269, 00:19:16.146 "idle_polls": 10269, 00:19:16.146 "completions": 0, 00:19:16.146 "requests": 0, 00:19:16.146 "request_latency": 0, 00:19:16.146 "pending_free_request": 0, 00:19:16.146 "pending_rdma_read": 0, 00:19:16.146 "pending_rdma_write": 0, 00:19:16.146 "pending_rdma_send": 0, 00:19:16.146 "total_send_wrs": 0, 00:19:16.146 "send_doorbell_updates": 0, 00:19:16.146 "total_recv_wrs": 4096, 00:19:16.146 "recv_doorbell_updates": 1 00:19:16.146 }, 00:19:16.146 { 00:19:16.146 "name": "mlx5_1", 00:19:16.146 "polls": 10269, 00:19:16.146 "idle_polls": 10269, 00:19:16.146 "completions": 0, 00:19:16.146 "requests": 0, 00:19:16.146 "request_latency": 0, 00:19:16.146 "pending_free_request": 0, 00:19:16.146 "pending_rdma_read": 0, 00:19:16.146 "pending_rdma_write": 0, 00:19:16.146 "pending_rdma_send": 0, 00:19:16.146 "total_send_wrs": 0, 00:19:16.146 "send_doorbell_updates": 0, 00:19:16.146 "total_recv_wrs": 4096, 00:19:16.146 "recv_doorbell_updates": 1 00:19:16.146 } 00:19:16.146 ] 00:19:16.146 } 00:19:16.146 ] 00:19:16.146 }, 00:19:16.146 { 00:19:16.146 "name": "nvmf_tgt_poll_group_003", 00:19:16.146 "admin_qpairs": 0, 00:19:16.146 "io_qpairs": 0, 00:19:16.146 "current_admin_qpairs": 0, 00:19:16.146 "current_io_qpairs": 0, 00:19:16.146 "pending_bdev_io": 0, 00:19:16.146 "completed_nvme_io": 0, 00:19:16.146 "transports": [ 00:19:16.146 { 00:19:16.146 "trtype": "RDMA", 00:19:16.146 "pending_data_buffer": 0, 00:19:16.146 "devices": [ 00:19:16.146 { 00:19:16.146 "name": "mlx5_0", 00:19:16.146 "polls": 790, 00:19:16.146 "idle_polls": 790, 00:19:16.146 "completions": 0, 00:19:16.146 "requests": 0, 00:19:16.146 "request_latency": 0, 00:19:16.146 "pending_free_request": 0, 00:19:16.146 "pending_rdma_read": 0, 00:19:16.146 "pending_rdma_write": 0, 00:19:16.146 "pending_rdma_send": 0, 00:19:16.146 "total_send_wrs": 0, 00:19:16.146 "send_doorbell_updates": 0, 00:19:16.146 "total_recv_wrs": 4096, 00:19:16.146 "recv_doorbell_updates": 1 00:19:16.146 }, 00:19:16.146 
{ 00:19:16.146 "name": "mlx5_1", 00:19:16.146 "polls": 790, 00:19:16.146 "idle_polls": 790, 00:19:16.146 "completions": 0, 00:19:16.146 "requests": 0, 00:19:16.146 "request_latency": 0, 00:19:16.146 "pending_free_request": 0, 00:19:16.146 "pending_rdma_read": 0, 00:19:16.146 "pending_rdma_write": 0, 00:19:16.146 "pending_rdma_send": 0, 00:19:16.146 "total_send_wrs": 0, 00:19:16.146 "send_doorbell_updates": 0, 00:19:16.146 "total_recv_wrs": 4096, 00:19:16.146 "recv_doorbell_updates": 1 00:19:16.146 } 00:19:16.146 ] 00:19:16.146 } 00:19:16.146 ] 00:19:16.146 } 00:19:16.146 ] 00:19:16.146 }' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:19:16.146 10:51:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # 
MALLOC_BLOCK_SIZE=512 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.405 Malloc1 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.405 [2024-11-28 10:51:55.186541] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # 
local arg=nvme 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:19:16.405 [2024-11-28 10:51:55.239200] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:19:16.405 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:16.405 could not add new controller: failed to write to nvme-fabrics device 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:16.405 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:16.406 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:16.406 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:16.406 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.406 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.663 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.663 10:51:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:17.624 10:51:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:19:17.624 10:51:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:17.624 10:51:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:17.624 10:51:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:17.624 10:51:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:19.601 10:51:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 
)) 00:19:19.601 10:51:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:19.601 10:51:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:19.601 10:51:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:19.601 10:51:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:19.601 10:51:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:19.601 10:51:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:20.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:19:20.540 10:51:59 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:20.540 [2024-11-28 10:51:59.320863] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:19:20.540 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:20.540 could not add new controller: failed to write to nvme-fabrics device 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.540 10:51:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:21.477 10:52:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:19:21.478 10:52:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:21.478 10:52:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:21.478 10:52:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:21.478 10:52:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:24.011 10:52:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:24.011 10:52:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:24.011 10:52:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:24.011 10:52:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:24.011 10:52:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:24.011 10:52:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:24.011 10:52:02 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:24.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.580 [2024-11-28 10:52:03.419300] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.580 10:52:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:25.959 10:52:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:25.959 10:52:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:25.959 10:52:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:25.959 10:52:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:25.959 10:52:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:27.864 10:52:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:27.864 10:52:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:27.864 10:52:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:27.864 10:52:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:27.864 10:52:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:27.864 10:52:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:27.864 10:52:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:28.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.803 [2024-11-28 10:52:07.469203] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.803 10:52:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:29.740 10:52:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:29.741 10:52:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:29.741 10:52:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:29.741 10:52:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:29.741 10:52:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:31.647 10:52:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:31.647 10:52:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # 
grep -c SPDKISFASTANDAWESOME 00:19:31.647 10:52:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:31.647 10:52:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:31.647 10:52:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:31.647 10:52:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:31.647 10:52:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:32.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:32.604 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:32.604 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:32.604 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:32.604 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.604 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.604 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:32.864 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.865 [2024-11-28 10:52:11.530726] 
rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.865 10:52:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:33.803 10:52:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:33.803 10:52:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:33.803 10:52:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:33.803 10:52:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:33.803 10:52:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:35.710 10:52:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:35.710 10:52:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:35.710 10:52:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:35.710 10:52:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:35.710 10:52:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:35.710 10:52:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:35.710 10:52:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:36.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:36.647 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:36.647 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.647 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.647 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:36.647 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # 
lsblk -l -o NAME,SERIAL 00:19:36.647 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.905 [2024-11-28 10:52:15.556630] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.905 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.906 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:36.906 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.906 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.906 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.906 10:52:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:37.842 10:52:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:37.842 10:52:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:37.842 10:52:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:37.842 10:52:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:37.842 10:52:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:39.748 10:52:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:39.748 10:52:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:39.748 10:52:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:39.748 10:52:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:39.748 10:52:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:39.748 10:52:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:39.748 10:52:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:40.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:40.685 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:40.685 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:40.685 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:40.685 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.945 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.946 [2024-11-28 10:52:19.619311] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.946 10:52:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:41.883 10:52:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:41.883 10:52:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:41.883 10:52:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:41.883 10:52:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:41.883 10:52:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:43.789 10:52:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:43.789 10:52:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:43.789 10:52:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:43.789 10:52:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:43.789 10:52:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:43.789 10:52:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:43.789 10:52:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:45.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.168 [2024-11-28 10:52:23.693845] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.168 10:52:23 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.168 [2024-11-28 10:52:23.750016] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:45.168 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 [2024-11-28 10:52:23.802247] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
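For orientation, the loop running at this point (target/rpc.sh lines 99-107) simply cycles a subsystem up and down five times without connecting a host in between. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt, the Malloc1 bdev created earlier, and the usual scripts/rpc.py location (the rpc.py path and default RPC socket are assumptions, not taken from this log):

#!/usr/bin/env bash
# Sketch of the create/teardown cycle exercised by this test loop.
set -e

RPC_PY="./scripts/rpc.py"          # assumed path to SPDK's RPC client
NQN="nqn.2016-06.io.spdk:cnode1"
TRADDR="192.168.100.8"             # RDMA-capable address used throughout this log

for i in $(seq 1 5); do
    # Create the subsystem with a fixed serial number, then expose an RDMA listener.
    $RPC_PY nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    $RPC_PY nvmf_subsystem_add_listener "$NQN" -t rdma -a "$TRADDR" -s 4420

    # Attach the Malloc1 bdev as namespace 1 and open the subsystem to any host NQN.
    $RPC_PY nvmf_subsystem_add_ns "$NQN" Malloc1
    $RPC_PY nvmf_subsystem_allow_any_host "$NQN"

    # Tear it straight back down; repeating this is what shakes out leaks and teardown races.
    $RPC_PY nvmf_subsystem_remove_ns "$NQN" 1
    $RPC_PY nvmf_delete_subsystem "$NQN"
done
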
00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 [2024-11-28 10:52:23.854429] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 [2024-11-28 10:52:23.906629] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.169 10:52:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:19:45.169 "tick_rate": 2500000000, 00:19:45.169 "poll_groups": [ 00:19:45.169 { 00:19:45.169 "name": "nvmf_tgt_poll_group_000", 00:19:45.169 "admin_qpairs": 2, 00:19:45.169 "io_qpairs": 27, 00:19:45.169 "current_admin_qpairs": 0, 00:19:45.169 "current_io_qpairs": 0, 00:19:45.169 "pending_bdev_io": 0, 00:19:45.169 "completed_nvme_io": 77, 00:19:45.169 "transports": [ 00:19:45.169 { 00:19:45.169 "trtype": "RDMA", 00:19:45.169 "pending_data_buffer": 0, 00:19:45.169 "devices": [ 00:19:45.169 { 00:19:45.169 "name": "mlx5_0", 00:19:45.169 "polls": 3418553, 00:19:45.169 "idle_polls": 3418306, 00:19:45.169 "completions": 269, 00:19:45.169 "requests": 134, 00:19:45.169 "request_latency": 29035818, 00:19:45.169 "pending_free_request": 0, 00:19:45.169 "pending_rdma_read": 0, 00:19:45.169 "pending_rdma_write": 0, 00:19:45.169 "pending_rdma_send": 0, 00:19:45.169 "total_send_wrs": 211, 00:19:45.169 "send_doorbell_updates": 123, 00:19:45.169 "total_recv_wrs": 4230, 00:19:45.169 "recv_doorbell_updates": 123 00:19:45.169 }, 00:19:45.169 { 00:19:45.169 "name": "mlx5_1", 00:19:45.169 "polls": 3418553, 00:19:45.169 "idle_polls": 3418553, 00:19:45.169 "completions": 0, 00:19:45.169 "requests": 0, 00:19:45.169 "request_latency": 0, 00:19:45.169 "pending_free_request": 0, 00:19:45.169 "pending_rdma_read": 0, 00:19:45.169 "pending_rdma_write": 0, 00:19:45.169 "pending_rdma_send": 0, 00:19:45.169 "total_send_wrs": 0, 00:19:45.170 "send_doorbell_updates": 0, 00:19:45.170 "total_recv_wrs": 4096, 00:19:45.170 "recv_doorbell_updates": 1 00:19:45.170 } 00:19:45.170 ] 00:19:45.170 } 00:19:45.170 ] 00:19:45.170 }, 00:19:45.170 { 00:19:45.170 "name": "nvmf_tgt_poll_group_001", 00:19:45.170 "admin_qpairs": 2, 00:19:45.170 "io_qpairs": 26, 00:19:45.170 "current_admin_qpairs": 0, 00:19:45.170 "current_io_qpairs": 0, 00:19:45.170 "pending_bdev_io": 0, 00:19:45.170 "completed_nvme_io": 77, 00:19:45.170 "transports": [ 00:19:45.170 { 00:19:45.170 "trtype": "RDMA", 00:19:45.170 "pending_data_buffer": 0, 00:19:45.170 "devices": [ 00:19:45.170 { 00:19:45.170 "name": "mlx5_0", 00:19:45.170 "polls": 3341247, 00:19:45.170 "idle_polls": 3341005, 00:19:45.170 "completions": 262, 00:19:45.170 "requests": 131, 00:19:45.170 "request_latency": 29481814, 00:19:45.170 "pending_free_request": 0, 00:19:45.170 "pending_rdma_read": 0, 00:19:45.170 "pending_rdma_write": 0, 00:19:45.170 "pending_rdma_send": 0, 00:19:45.170 "total_send_wrs": 207, 00:19:45.170 "send_doorbell_updates": 118, 00:19:45.170 "total_recv_wrs": 4227, 00:19:45.170 "recv_doorbell_updates": 119 00:19:45.170 }, 00:19:45.170 { 00:19:45.170 "name": "mlx5_1", 00:19:45.170 "polls": 3341247, 00:19:45.170 "idle_polls": 3341247, 00:19:45.170 "completions": 0, 00:19:45.170 "requests": 0, 00:19:45.170 
"request_latency": 0, 00:19:45.170 "pending_free_request": 0, 00:19:45.170 "pending_rdma_read": 0, 00:19:45.170 "pending_rdma_write": 0, 00:19:45.170 "pending_rdma_send": 0, 00:19:45.170 "total_send_wrs": 0, 00:19:45.170 "send_doorbell_updates": 0, 00:19:45.170 "total_recv_wrs": 4096, 00:19:45.170 "recv_doorbell_updates": 1 00:19:45.170 } 00:19:45.170 ] 00:19:45.170 } 00:19:45.170 ] 00:19:45.170 }, 00:19:45.170 { 00:19:45.170 "name": "nvmf_tgt_poll_group_002", 00:19:45.170 "admin_qpairs": 1, 00:19:45.170 "io_qpairs": 26, 00:19:45.170 "current_admin_qpairs": 0, 00:19:45.170 "current_io_qpairs": 0, 00:19:45.170 "pending_bdev_io": 0, 00:19:45.170 "completed_nvme_io": 174, 00:19:45.170 "transports": [ 00:19:45.170 { 00:19:45.170 "trtype": "RDMA", 00:19:45.170 "pending_data_buffer": 0, 00:19:45.170 "devices": [ 00:19:45.170 { 00:19:45.170 "name": "mlx5_0", 00:19:45.170 "polls": 3383042, 00:19:45.170 "idle_polls": 3382698, 00:19:45.170 "completions": 405, 00:19:45.170 "requests": 202, 00:19:45.170 "request_latency": 59985916, 00:19:45.170 "pending_free_request": 0, 00:19:45.170 "pending_rdma_read": 0, 00:19:45.170 "pending_rdma_write": 0, 00:19:45.170 "pending_rdma_send": 0, 00:19:45.170 "total_send_wrs": 364, 00:19:45.170 "send_doorbell_updates": 166, 00:19:45.170 "total_recv_wrs": 4298, 00:19:45.170 "recv_doorbell_updates": 166 00:19:45.170 }, 00:19:45.170 { 00:19:45.170 "name": "mlx5_1", 00:19:45.170 "polls": 3383042, 00:19:45.170 "idle_polls": 3383042, 00:19:45.170 "completions": 0, 00:19:45.170 "requests": 0, 00:19:45.170 "request_latency": 0, 00:19:45.170 "pending_free_request": 0, 00:19:45.170 "pending_rdma_read": 0, 00:19:45.170 "pending_rdma_write": 0, 00:19:45.170 "pending_rdma_send": 0, 00:19:45.170 "total_send_wrs": 0, 00:19:45.170 "send_doorbell_updates": 0, 00:19:45.170 "total_recv_wrs": 4096, 00:19:45.170 "recv_doorbell_updates": 1 00:19:45.170 } 00:19:45.170 ] 00:19:45.170 } 00:19:45.170 ] 00:19:45.170 }, 00:19:45.170 { 00:19:45.170 "name": "nvmf_tgt_poll_group_003", 00:19:45.170 "admin_qpairs": 2, 00:19:45.170 "io_qpairs": 26, 00:19:45.170 "current_admin_qpairs": 0, 00:19:45.170 "current_io_qpairs": 0, 00:19:45.170 "pending_bdev_io": 0, 00:19:45.170 "completed_nvme_io": 127, 00:19:45.170 "transports": [ 00:19:45.170 { 00:19:45.170 "trtype": "RDMA", 00:19:45.170 "pending_data_buffer": 0, 00:19:45.170 "devices": [ 00:19:45.170 { 00:19:45.170 "name": "mlx5_0", 00:19:45.170 "polls": 2565634, 00:19:45.170 "idle_polls": 2565322, 00:19:45.170 "completions": 360, 00:19:45.170 "requests": 180, 00:19:45.170 "request_latency": 45877210, 00:19:45.170 "pending_free_request": 0, 00:19:45.170 "pending_rdma_read": 0, 00:19:45.170 "pending_rdma_write": 0, 00:19:45.170 "pending_rdma_send": 0, 00:19:45.170 "total_send_wrs": 305, 00:19:45.170 "send_doorbell_updates": 156, 00:19:45.170 "total_recv_wrs": 4276, 00:19:45.170 "recv_doorbell_updates": 157 00:19:45.170 }, 00:19:45.170 { 00:19:45.170 "name": "mlx5_1", 00:19:45.170 "polls": 2565634, 00:19:45.170 "idle_polls": 2565634, 00:19:45.170 "completions": 0, 00:19:45.170 "requests": 0, 00:19:45.170 "request_latency": 0, 00:19:45.170 "pending_free_request": 0, 00:19:45.170 "pending_rdma_read": 0, 00:19:45.170 "pending_rdma_write": 0, 00:19:45.170 "pending_rdma_send": 0, 00:19:45.170 "total_send_wrs": 0, 00:19:45.170 "send_doorbell_updates": 0, 00:19:45.170 "total_recv_wrs": 4096, 00:19:45.170 "recv_doorbell_updates": 1 00:19:45.170 } 00:19:45.170 ] 00:19:45.170 } 00:19:45.170 ] 00:19:45.170 } 00:19:45.170 ] 00:19:45.170 }' 00:19:45.170 10:52:24 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:45.170 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:45.170 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:45.170 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:45.170 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1296 > 0 )) 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 164380758 > 0 )) 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:45.429 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:45.430 rmmod nvme_rdma 00:19:45.430 rmmod nvme_fabrics 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1680639 ']' 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1680639 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1680639 ']' 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1680639 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1680639 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1680639' 00:19:45.430 killing process with pid 1680639 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1680639 00:19:45.430 10:52:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1680639 00:19:47.339 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:47.339 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:47.339 00:19:47.339 real 0m41.582s 00:19:47.339 user 2m9.992s 00:19:47.339 sys 0m8.698s 00:19:47.339 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.339 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:47.339 ************************************ 00:19:47.339 END TEST nvmf_rpc 00:19:47.339 ************************************ 00:19:47.339 10:52:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:19:47.339 10:52:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.339 10:52:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.339 10:52:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.339 ************************************ 00:19:47.339 START TEST nvmf_invalid 00:19:47.339 ************************************ 00:19:47.339 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:19:47.599 * Looking for test storage... 
00:19:47.599 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:19:47.599 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.600 --rc genhtml_branch_coverage=1 00:19:47.600 --rc genhtml_function_coverage=1 00:19:47.600 --rc genhtml_legend=1 00:19:47.600 --rc geninfo_all_blocks=1 00:19:47.600 --rc geninfo_unexecuted_blocks=1 00:19:47.600 00:19:47.600 ' 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.600 --rc genhtml_branch_coverage=1 00:19:47.600 --rc genhtml_function_coverage=1 00:19:47.600 --rc genhtml_legend=1 00:19:47.600 --rc geninfo_all_blocks=1 00:19:47.600 --rc geninfo_unexecuted_blocks=1 00:19:47.600 00:19:47.600 ' 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.600 --rc genhtml_branch_coverage=1 00:19:47.600 --rc genhtml_function_coverage=1 00:19:47.600 --rc genhtml_legend=1 00:19:47.600 --rc geninfo_all_blocks=1 00:19:47.600 --rc geninfo_unexecuted_blocks=1 00:19:47.600 00:19:47.600 ' 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.600 --rc genhtml_branch_coverage=1 00:19:47.600 --rc genhtml_function_coverage=1 00:19:47.600 --rc genhtml_legend=1 00:19:47.600 --rc geninfo_all_blocks=1 00:19:47.600 --rc geninfo_unexecuted_blocks=1 00:19:47.600 00:19:47.600 ' 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:19:47.600 
10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.600 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.600 10:52:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:19:55.729 10:52:34 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.729 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:55.730 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:55.730 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:55.730 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:55.730 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:19:55.730 10:52:34 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:55.730 10:52:34 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:55.730 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.730 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:55.730 altname enp217s0f0np0 00:19:55.730 altname ens818f0np0 00:19:55.730 inet 192.168.100.8/24 scope global mlx_0_0 00:19:55.730 valid_lft forever preferred_lft forever 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:55.730 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.730 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:55.730 altname enp217s0f1np1 00:19:55.730 altname ens818f1np1 00:19:55.730 inet 192.168.100.9/24 scope global mlx_0_1 00:19:55.730 valid_lft forever preferred_lft forever 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:55.730 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:55.731 10:52:34 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:55.731 192.168.100.9' 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:55.731 192.168.100.9' 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:55.731 10:52:34 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:55.731 192.168.100.9' 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:55.731 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1690338 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1690338 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1690338 ']' 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.990 10:52:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:55.990 [2024-11-28 10:52:34.713911] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:55.990 [2024-11-28 10:52:34.714005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.990 [2024-11-28 10:52:34.867895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.249 [2024-11-28 10:52:34.965954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.250 [2024-11-28 10:52:34.966001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:56.250 [2024-11-28 10:52:34.966013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.250 [2024-11-28 10:52:34.966025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.250 [2024-11-28 10:52:34.966035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.250 [2024-11-28 10:52:34.968662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.250 [2024-11-28 10:52:34.968734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.250 [2024-11-28 10:52:34.968794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.250 [2024-11-28 10:52:34.968803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.815 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.815 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:19:56.815 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:56.815 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:56.815 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:56.815 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.815 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:56.815 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10320 00:19:57.073 [2024-11-28 10:52:35.747126] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:19:57.073 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:19:57.073 { 00:19:57.073 "nqn": "nqn.2016-06.io.spdk:cnode10320", 00:19:57.073 "tgt_name": "foobar", 00:19:57.073 "method": "nvmf_create_subsystem", 00:19:57.073 "req_id": 1 00:19:57.073 } 00:19:57.073 Got JSON-RPC error response 00:19:57.073 response: 00:19:57.073 { 00:19:57.073 "code": -32603, 00:19:57.073 "message": "Unable to find target foobar" 00:19:57.073 }' 00:19:57.073 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:19:57.073 { 00:19:57.073 "nqn": "nqn.2016-06.io.spdk:cnode10320", 00:19:57.073 "tgt_name": "foobar", 00:19:57.073 "method": "nvmf_create_subsystem", 00:19:57.073 "req_id": 1 00:19:57.073 } 00:19:57.073 Got JSON-RPC error response 00:19:57.073 response: 00:19:57.073 { 00:19:57.073 "code": -32603, 00:19:57.073 "message": "Unable to find target foobar" 00:19:57.073 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:19:57.073 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:19:57.073 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21272 00:19:57.331 [2024-11-28 10:52:35.959886] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode21272: invalid serial number 'SPDKISFASTANDAWESOME' 00:19:57.331 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:19:57.331 { 00:19:57.331 "nqn": "nqn.2016-06.io.spdk:cnode21272", 00:19:57.331 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:57.331 "method": "nvmf_create_subsystem", 00:19:57.331 "req_id": 1 00:19:57.331 } 00:19:57.331 Got JSON-RPC error response 00:19:57.331 response: 00:19:57.331 { 00:19:57.331 "code": -32602, 00:19:57.331 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:57.331 }' 00:19:57.331 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:19:57.331 { 00:19:57.331 "nqn": "nqn.2016-06.io.spdk:cnode21272", 00:19:57.331 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:57.331 "method": "nvmf_create_subsystem", 00:19:57.331 "req_id": 1 00:19:57.331 } 00:19:57.331 Got JSON-RPC error response 00:19:57.331 response: 00:19:57.331 { 00:19:57.331 "code": -32602, 00:19:57.331 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:57.332 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:57.332 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:19:57.332 10:52:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32028 00:19:57.332 [2024-11-28 10:52:36.172596] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32028: invalid model number 'SPDK_Controller' 00:19:57.332 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:19:57.332 { 00:19:57.332 "nqn": "nqn.2016-06.io.spdk:cnode32028", 00:19:57.332 "model_number": "SPDK_Controller\u001f", 00:19:57.332 "method": "nvmf_create_subsystem", 00:19:57.332 "req_id": 1 00:19:57.332 } 00:19:57.332 Got JSON-RPC error response 00:19:57.332 response: 00:19:57.332 { 00:19:57.332 "code": -32602, 00:19:57.332 "message": "Invalid MN SPDK_Controller\u001f" 00:19:57.332 }' 00:19:57.332 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:19:57.332 { 00:19:57.332 "nqn": "nqn.2016-06.io.spdk:cnode32028", 00:19:57.332 "model_number": "SPDK_Controller\u001f", 00:19:57.332 "method": "nvmf_create_subsystem", 00:19:57.332 "req_id": 1 00:19:57.332 } 00:19:57.332 Got JSON-RPC error response 00:19:57.332 response: 00:19:57.332 { 00:19:57.332 "code": -32602, 00:19:57.332 "message": "Invalid MN SPDK_Controller\u001f" 00:19:57.332 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:57.332 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:19:57.332 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:19:57.332 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:57.332 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@21 -- # local chars 00:19:57.332 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:19:57.332 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:57.332 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
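(The repeated "printf %x" / "echo -e" / "string+=" entries traced above and below come from invalid.sh's gen_random_s helper, which assembles a throwaway serial or model string one character at a time from byte values 32-127. A minimal standalone sketch of that idea, for illustration only and not the repository's exact helper, could look like this:)

  # Sketch: build a random test string from byte values 32..127, the same pool
  # as the chars array in the trace; one character is appended per iteration.
  gen_random_s_sketch() {
      local length=$1 ll string=
      for (( ll = 0; ll < length; ll++ )); do
          local code=$(( RANDOM % 96 + 32 ))                  # 32..127 inclusive
          string+=$(echo -e "\x$(printf '%x' "$code")")       # hex code -> character
      done
      echo "$string"
  }
  # Example: gen_random_s_sketch 21 might yield something like '9J:T~2+Nv.KH=0ZJQ>-S'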
00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x51' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '9J:T~2+Nv.KH=0ZJQ>-S' 00:19:57.592 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '9J:T~2+Nv.KH=0ZJQ>-S' nqn.2016-06.io.spdk:cnode7714 00:19:57.852 [2024-11-28 10:52:36.533868] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7714: invalid serial number '9J:T~2+Nv.KH=0ZJQ>-S' 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:19:57.852 { 00:19:57.852 "nqn": "nqn.2016-06.io.spdk:cnode7714", 00:19:57.852 "serial_number": "9J:T~2+Nv.KH=0ZJQ>\u007f-S", 00:19:57.852 "method": "nvmf_create_subsystem", 00:19:57.852 "req_id": 1 00:19:57.852 } 00:19:57.852 Got JSON-RPC error response 00:19:57.852 response: 00:19:57.852 { 00:19:57.852 "code": -32602, 00:19:57.852 
"message": "Invalid SN 9J:T~2+Nv.KH=0ZJQ>\u007f-S" 00:19:57.852 }' 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:19:57.852 { 00:19:57.852 "nqn": "nqn.2016-06.io.spdk:cnode7714", 00:19:57.852 "serial_number": "9J:T~2+Nv.KH=0ZJQ>\u007f-S", 00:19:57.852 "method": "nvmf_create_subsystem", 00:19:57.852 "req_id": 1 00:19:57.852 } 00:19:57.852 Got JSON-RPC error response 00:19:57.852 response: 00:19:57.852 { 00:19:57.852 "code": -32602, 00:19:57.852 "message": "Invalid SN 9J:T~2+Nv.KH=0ZJQ>\u007f-S" 00:19:57.852 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:19:57.852 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 
00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:19:57.853 10:52:36 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:19:57.853 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:19:57.854 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.854 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:57.854 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:19:57.854 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:19:57.854 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:19:57.854 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:57.854 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x24' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:19:58.114 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ # == \- ]] 00:19:58.115 10:52:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '#kS,2y5kz}gcNu~7u0[d\$"iQ:53 ver2_l ? 
ver1_l : ver2_l) )) 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:02.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.354 --rc genhtml_branch_coverage=1 00:20:02.354 --rc genhtml_function_coverage=1 00:20:02.354 --rc genhtml_legend=1 00:20:02.354 --rc geninfo_all_blocks=1 00:20:02.354 --rc geninfo_unexecuted_blocks=1 00:20:02.354 00:20:02.354 ' 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:02.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.354 --rc genhtml_branch_coverage=1 00:20:02.354 --rc genhtml_function_coverage=1 00:20:02.354 --rc genhtml_legend=1 00:20:02.354 --rc geninfo_all_blocks=1 00:20:02.354 --rc geninfo_unexecuted_blocks=1 00:20:02.354 00:20:02.354 ' 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:02.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.354 --rc genhtml_branch_coverage=1 00:20:02.354 --rc genhtml_function_coverage=1 00:20:02.354 --rc genhtml_legend=1 00:20:02.354 --rc geninfo_all_blocks=1 00:20:02.354 --rc geninfo_unexecuted_blocks=1 00:20:02.354 00:20:02.354 ' 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:02.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.354 --rc genhtml_branch_coverage=1 00:20:02.354 --rc genhtml_function_coverage=1 00:20:02.354 --rc genhtml_legend=1 00:20:02.354 --rc geninfo_all_blocks=1 00:20:02.354 --rc geninfo_unexecuted_blocks=1 00:20:02.354 00:20:02.354 ' 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.354 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:02.355 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:20:02.355 10:52:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:10.482 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:10.482 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:10.482 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:10.482 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.482 10:52:48 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:10.482 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:10.483 10:52:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:10.483 
10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:10.483 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:10.483 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:10.483 altname enp217s0f0np0 00:20:10.483 altname ens818f0np0 00:20:10.483 inet 192.168.100.8/24 scope global mlx_0_0 00:20:10.483 valid_lft forever preferred_lft forever 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:10.483 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:10.483 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:10.483 altname enp217s0f1np1 00:20:10.483 altname ens818f1np1 00:20:10.483 inet 192.168.100.9/24 scope global mlx_0_1 00:20:10.483 valid_lft forever preferred_lft forever 00:20:10.483 10:52:49 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:10.483 
10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:10.483 192.168.100.9' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:10.483 192.168.100.9' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:10.483 192.168.100.9' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:10.483 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1695483 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1695483 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1695483 ']' 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.484 10:52:49 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.484 10:52:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:10.484 [2024-11-28 10:52:49.238965] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:10.484 [2024-11-28 10:52:49.239057] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.744 [2024-11-28 10:52:49.391337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:10.744 [2024-11-28 10:52:49.489822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.744 [2024-11-28 10:52:49.489869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.744 [2024-11-28 10:52:49.489885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.744 [2024-11-28 10:52:49.489915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.744 [2024-11-28 10:52:49.489925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.744 [2024-11-28 10:52:49.492338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.744 [2024-11-28 10:52:49.492398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.744 [2024-11-28 10:52:49.492404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.312 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.312 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:20:11.312 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.312 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.312 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:11.312 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.312 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:11.312 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.312 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:11.312 [2024-11-28 10:52:50.125670] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f35e196a940) succeed. 
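For reference, the target bring-up issued via rpc_cmd in the surrounding trace can be reproduced by hand with SPDK's standard rpc.py client. This is only a minimal sketch using the same parameters visible in the trace (it assumes the default /var/tmp/spdk.sock RPC socket and the 192.168.100.8 RDMA address detected above); it is not the test script's own code path:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # transport, subsystem, listener and backing null bdev, as in the rpc_cmd calls traced here
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512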
00:20:11.312 [2024-11-28 10:52:50.135067] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f35e1926940) succeed. 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:11.571 [2024-11-28 10:52:50.361076] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:11.571 NULL1 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1695766 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
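The stress pass that follows is essentially a liveness poll: connect_stress runs in the background against the listener created above, and the script keeps probing its PID with kill -0 while issuing RPCs until the 10-second run (-t 10) finishes. A simplified sketch of that pattern (the background command here is a placeholder, not the real invocation):

  long_running_workload &            # placeholder for the connect_stress command line above
  PERF_PID=$!
  while kill -0 "$PERF_PID" 2>/dev/null; do
      # keep issuing RPCs against the target while the workload is still running
      sleep 1
  done
  wait "$PERF_PID"                   # once kill -0 reports 'No such process', reap it and move on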
00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.571 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.572 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.831 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:12.091 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.092 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:12.092 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:12.092 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.092 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:12.351 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.351 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:12.351 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:12.351 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.351 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:12.920 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.920 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:12.920 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:12.920 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.920 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:13.179 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.179 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:13.179 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:13.179 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.179 10:52:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:13.438 10:52:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:13.438 10:52:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:13.438 10:52:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:13.438 10:52:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.438 10:52:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.006 10:52:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.006 10:52:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:14.006 10:52:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:14.006 10:52:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.006 10:52:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.266 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.266 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:14.266 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:14.266 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.266 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.525 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.525 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:14.525 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:14.525 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.525 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:15.093 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.093 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:15.093 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:15.093 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.093 10:52:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:15.353 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.353 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:15.353 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:15.353 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.353 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:15.612 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:20:15.612 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:15.612 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:15.612 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.612 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.181 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.181 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:16.181 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:16.181 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.181 10:52:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.439 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.439 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:16.439 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:16.439 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.439 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.698 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.698 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:16.698 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:16.698 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.698 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:17.268 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.268 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:17.268 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:17.268 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.268 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:17.527 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.527 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:17.527 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:17.527 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.527 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:17.787 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.787 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:17.787 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:17.787 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.787 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:18.356 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.356 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:18.356 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:18.356 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.356 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:18.616 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.616 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:18.616 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:18.616 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.616 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:18.875 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.875 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:18.876 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:18.876 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.876 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:19.445 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.445 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:19.445 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.445 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.445 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:19.704 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.704 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:19.704 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.704 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.704 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:19.964 10:52:58 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.964 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:19.964 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.964 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.964 10:52:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:20.532 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.532 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:20.532 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:20.532 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.532 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:20.792 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.792 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:20.792 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:20.792 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.792 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:21.051 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.051 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:21.051 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:21.051 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.051 10:52:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:21.691 10:53:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.691 10:53:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:21.691 10:53:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:21.691 10:53:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.691 10:53:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:22.022 10:53:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.022 10:53:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:22.022 10:53:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:22.022 10:53:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.022 10:53:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:22.022 
Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.308 10:53:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1695766 00:20:22.308 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1695766) - No such process 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1695766 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:22.308 rmmod nvme_rdma 00:20:22.308 rmmod nvme_fabrics 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1695483 ']' 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1695483 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1695483 ']' 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1695483 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1695483 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1695483' 00:20:22.308 killing process with pid 1695483 00:20:22.308 10:53:01 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1695483 00:20:22.308 10:53:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1695483 00:20:24.251 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:24.252 00:20:24.252 real 0m21.658s 00:20:24.252 user 0m44.596s 00:20:24.252 sys 0m10.535s 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:24.252 ************************************ 00:20:24.252 END TEST nvmf_connect_stress 00:20:24.252 ************************************ 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:24.252 ************************************ 00:20:24.252 START TEST nvmf_fused_ordering 00:20:24.252 ************************************ 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:20:24.252 * Looking for test storage... 
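Before the fused_ordering body runs, autotest_common checks the installed lcov version against 2 with a small pure-bash comparator (the lt / cmp_versions / decimal helpers traced below). A rough reconstruction of that comparison, assuming only dotted numeric versions need to be handled:

  # returns success (0) if version $1 is strictly less than version $2
  version_lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v d1 d2
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          d1=${ver1[v]:-0}; d2=${ver2[v]:-0}
          [[ $d1 =~ ^[0-9]+$ ]] || d1=0      # non-numeric components compare as 0
          [[ $d2 =~ ^[0-9]+$ ]] || d2=0
          (( d1 > d2 )) && return 1
          (( d1 < d2 )) && return 0
      done
      return 1                               # equal versions are not 'less than'
  }
  # e.g. version_lt 1.15 2 succeeds, mirroring the 'lt 1.15 2' call in the trace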
00:20:24.252 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:24.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.252 --rc genhtml_branch_coverage=1 00:20:24.252 --rc genhtml_function_coverage=1 00:20:24.252 --rc genhtml_legend=1 00:20:24.252 --rc geninfo_all_blocks=1 00:20:24.252 --rc geninfo_unexecuted_blocks=1 00:20:24.252 00:20:24.252 ' 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:24.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.252 --rc genhtml_branch_coverage=1 00:20:24.252 --rc genhtml_function_coverage=1 00:20:24.252 --rc genhtml_legend=1 00:20:24.252 --rc geninfo_all_blocks=1 00:20:24.252 --rc geninfo_unexecuted_blocks=1 00:20:24.252 00:20:24.252 ' 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:24.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.252 --rc genhtml_branch_coverage=1 00:20:24.252 --rc genhtml_function_coverage=1 00:20:24.252 --rc genhtml_legend=1 00:20:24.252 --rc geninfo_all_blocks=1 00:20:24.252 --rc geninfo_unexecuted_blocks=1 00:20:24.252 00:20:24.252 ' 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:24.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.252 --rc genhtml_branch_coverage=1 00:20:24.252 --rc genhtml_function_coverage=1 00:20:24.252 --rc genhtml_legend=1 00:20:24.252 --rc geninfo_all_blocks=1 00:20:24.252 --rc geninfo_unexecuted_blocks=1 00:20:24.252 00:20:24.252 ' 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:20:24.252 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:24.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:20:24.253 10:53:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:32.378 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:32.378 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:32.378 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:32.378 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:32.379 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.379 10:53:10 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:32.379 
10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:32.379 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:32.379 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:32.379 altname enp217s0f0np0 00:20:32.379 altname ens818f0np0 00:20:32.379 inet 192.168.100.8/24 scope global mlx_0_0 00:20:32.379 valid_lft forever preferred_lft forever 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:32.379 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:32.379 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:32.379 altname enp217s0f1np1 00:20:32.379 altname ens818f1np1 00:20:32.379 inet 192.168.100.9/24 scope global mlx_0_1 00:20:32.379 valid_lft forever preferred_lft forever 00:20:32.379 10:53:10 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:32.379 
10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:32.379 192.168.100.9' 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:32.379 192.168.100.9' 00:20:32.379 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:32.380 192.168.100.9' 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1701798 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1701798 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1701798 ']' 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.380 10:53:10 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.380 10:53:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.380 [2024-11-28 10:53:10.870821] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:32.380 [2024-11-28 10:53:10.870924] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.380 [2024-11-28 10:53:11.028303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.380 [2024-11-28 10:53:11.123516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.380 [2024-11-28 10:53:11.123562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.380 [2024-11-28 10:53:11.123575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.380 [2024-11-28 10:53:11.123587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.380 [2024-11-28 10:53:11.123596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.380 [2024-11-28 10:53:11.124940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.948 [2024-11-28 10:53:11.734819] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f57c9f61940) succeed. 00:20:32.948 [2024-11-28 10:53:11.744040] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f57c9f1d940) succeed. 
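At this point the trace shows nvmfappstart launching build/bin/nvmf_tgt (pid 1701798) with -i 0 -e 0xFFFF -m 0x2, waiting for the /var/tmp/spdk.sock RPC socket, and fused_ordering.sh@15 creating the RDMA transport, which is what produces the two create_ib_device notices for mlx5_0 and mlx5_1 above. A minimal stand-alone sketch of that bring-up, driving scripts/rpc.py directly rather than the test framework's rpc_cmd wrapper and assuming the same workspace path (the socket-polling loop is a crude stand-in for waitforlisten, not the real helper):

#!/usr/bin/env bash
# Sketch only: start an SPDK NVMe-oF target and add an RDMA transport the way
# the traced test does; paths and the readiness wait are simplified.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
tgt_pid=$!
echo "nvmf_tgt running as pid $tgt_pid"
# Poll until the RPC socket answers (stand-in for waitforlisten).
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# Same options as NVMF_TRANSPORT_OPTS ('-t rdma --num-shared-buffers 1024') plus -u 8192.
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192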
00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.948 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:33.207 [2024-11-28 10:53:11.836326] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:33.207 NULL1 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:33.207 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.208 10:53:11 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:33.208 [2024-11-28 10:53:11.917872] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
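Before the fused_ordering tool is launched, fused_ordering.sh has finished the target-side configuration traced above: subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, any host allowed, at most 10 namespaces), an RDMA listener on 192.168.100.8:4420, and a 1000 MB null bdev NULL1 attached as a namespace, which the tool reports as the 1GB namespace below. Driven by hand through scripts/rpc.py instead of rpc_cmd, and assuming the target from the previous sketch is already listening on /var/tmp/spdk.sock, the same sequence looks roughly like:

# Sketch of the subsystem/listener/namespace setup traced above.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC bdev_null_create NULL1 1000 512   # 1000 MB backing device, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# Finally run the exerciser against that listener; the fused_ordering(0) ..
# fused_ordering(1023) lines that follow in this log are its output.
$SPDK/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'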
00:20:33.208 [2024-11-28 10:53:11.917936] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701865 ] 00:20:33.468 Attached to nqn.2016-06.io.spdk:cnode1 00:20:33.468 Namespace ID: 1 size: 1GB 00:20:33.468 fused_ordering(0) 00:20:33.468 fused_ordering(1) 00:20:33.468 fused_ordering(2) 00:20:33.468 fused_ordering(3) 00:20:33.468 fused_ordering(4) 00:20:33.468 fused_ordering(5) 00:20:33.468 fused_ordering(6) 00:20:33.468 fused_ordering(7) 00:20:33.468 fused_ordering(8) 00:20:33.468 fused_ordering(9) 00:20:33.468 fused_ordering(10) 00:20:33.468 fused_ordering(11) 00:20:33.468 fused_ordering(12) 00:20:33.468 fused_ordering(13) 00:20:33.468 fused_ordering(14) 00:20:33.468 fused_ordering(15) 00:20:33.468 fused_ordering(16) 00:20:33.469 fused_ordering(17) 00:20:33.469 fused_ordering(18) 00:20:33.469 fused_ordering(19) 00:20:33.469 fused_ordering(20) 00:20:33.469 fused_ordering(21) 00:20:33.469 fused_ordering(22) 00:20:33.469 fused_ordering(23) 00:20:33.469 fused_ordering(24) 00:20:33.469 fused_ordering(25) 00:20:33.469 fused_ordering(26) 00:20:33.469 fused_ordering(27) 00:20:33.469 fused_ordering(28) 00:20:33.469 fused_ordering(29) 00:20:33.469 fused_ordering(30) 00:20:33.469 fused_ordering(31) 00:20:33.469 fused_ordering(32) 00:20:33.469 fused_ordering(33) 00:20:33.469 fused_ordering(34) 00:20:33.469 fused_ordering(35) 00:20:33.469 fused_ordering(36) 00:20:33.469 fused_ordering(37) 00:20:33.469 fused_ordering(38) 00:20:33.469 fused_ordering(39) 00:20:33.469 fused_ordering(40) 00:20:33.469 fused_ordering(41) 00:20:33.469 fused_ordering(42) 00:20:33.469 fused_ordering(43) 00:20:33.469 fused_ordering(44) 00:20:33.469 fused_ordering(45) 00:20:33.469 fused_ordering(46) 00:20:33.469 fused_ordering(47) 00:20:33.469 fused_ordering(48) 00:20:33.469 fused_ordering(49) 00:20:33.469 fused_ordering(50) 00:20:33.469 fused_ordering(51) 00:20:33.469 fused_ordering(52) 00:20:33.469 fused_ordering(53) 00:20:33.469 fused_ordering(54) 00:20:33.469 fused_ordering(55) 00:20:33.469 fused_ordering(56) 00:20:33.469 fused_ordering(57) 00:20:33.469 fused_ordering(58) 00:20:33.469 fused_ordering(59) 00:20:33.469 fused_ordering(60) 00:20:33.469 fused_ordering(61) 00:20:33.469 fused_ordering(62) 00:20:33.469 fused_ordering(63) 00:20:33.469 fused_ordering(64) 00:20:33.469 fused_ordering(65) 00:20:33.469 fused_ordering(66) 00:20:33.469 fused_ordering(67) 00:20:33.469 fused_ordering(68) 00:20:33.469 fused_ordering(69) 00:20:33.469 fused_ordering(70) 00:20:33.469 fused_ordering(71) 00:20:33.469 fused_ordering(72) 00:20:33.469 fused_ordering(73) 00:20:33.469 fused_ordering(74) 00:20:33.469 fused_ordering(75) 00:20:33.469 fused_ordering(76) 00:20:33.469 fused_ordering(77) 00:20:33.469 fused_ordering(78) 00:20:33.469 fused_ordering(79) 00:20:33.469 fused_ordering(80) 00:20:33.469 fused_ordering(81) 00:20:33.469 fused_ordering(82) 00:20:33.469 fused_ordering(83) 00:20:33.469 fused_ordering(84) 00:20:33.469 fused_ordering(85) 00:20:33.469 fused_ordering(86) 00:20:33.469 fused_ordering(87) 00:20:33.469 fused_ordering(88) 00:20:33.469 fused_ordering(89) 00:20:33.469 fused_ordering(90) 00:20:33.469 fused_ordering(91) 00:20:33.469 fused_ordering(92) 00:20:33.469 fused_ordering(93) 00:20:33.469 fused_ordering(94) 00:20:33.469 fused_ordering(95) 00:20:33.469 fused_ordering(96) 00:20:33.469 fused_ordering(97) 00:20:33.469 fused_ordering(98) 
00:20:33.469 fused_ordering(99) 00:20:33.469 fused_ordering(100) 00:20:33.469 fused_ordering(101) 00:20:33.469 fused_ordering(102) 00:20:33.469 fused_ordering(103) 00:20:33.469 fused_ordering(104) 00:20:33.469 fused_ordering(105) 00:20:33.469 fused_ordering(106) 00:20:33.469 fused_ordering(107) 00:20:33.469 fused_ordering(108) 00:20:33.469 fused_ordering(109) 00:20:33.469 fused_ordering(110) 00:20:33.469 fused_ordering(111) 00:20:33.469 fused_ordering(112) 00:20:33.469 fused_ordering(113) 00:20:33.469 fused_ordering(114) 00:20:33.469 fused_ordering(115) 00:20:33.469 fused_ordering(116) 00:20:33.469 fused_ordering(117) 00:20:33.469 fused_ordering(118) 00:20:33.469 fused_ordering(119) 00:20:33.469 fused_ordering(120) 00:20:33.469 fused_ordering(121) 00:20:33.469 fused_ordering(122) 00:20:33.469 fused_ordering(123) 00:20:33.469 fused_ordering(124) 00:20:33.469 fused_ordering(125) 00:20:33.469 fused_ordering(126) 00:20:33.469 fused_ordering(127) 00:20:33.469 fused_ordering(128) 00:20:33.469 fused_ordering(129) 00:20:33.469 fused_ordering(130) 00:20:33.469 fused_ordering(131) 00:20:33.469 fused_ordering(132) 00:20:33.469 fused_ordering(133) 00:20:33.469 fused_ordering(134) 00:20:33.469 fused_ordering(135) 00:20:33.469 fused_ordering(136) 00:20:33.469 fused_ordering(137) 00:20:33.469 fused_ordering(138) 00:20:33.469 fused_ordering(139) 00:20:33.469 fused_ordering(140) 00:20:33.469 fused_ordering(141) 00:20:33.469 fused_ordering(142) 00:20:33.469 fused_ordering(143) 00:20:33.469 fused_ordering(144) 00:20:33.469 fused_ordering(145) 00:20:33.469 fused_ordering(146) 00:20:33.469 fused_ordering(147) 00:20:33.469 fused_ordering(148) 00:20:33.469 fused_ordering(149) 00:20:33.469 fused_ordering(150) 00:20:33.469 fused_ordering(151) 00:20:33.469 fused_ordering(152) 00:20:33.469 fused_ordering(153) 00:20:33.469 fused_ordering(154) 00:20:33.469 fused_ordering(155) 00:20:33.469 fused_ordering(156) 00:20:33.469 fused_ordering(157) 00:20:33.469 fused_ordering(158) 00:20:33.469 fused_ordering(159) 00:20:33.469 fused_ordering(160) 00:20:33.469 fused_ordering(161) 00:20:33.469 fused_ordering(162) 00:20:33.469 fused_ordering(163) 00:20:33.469 fused_ordering(164) 00:20:33.469 fused_ordering(165) 00:20:33.469 fused_ordering(166) 00:20:33.469 fused_ordering(167) 00:20:33.469 fused_ordering(168) 00:20:33.469 fused_ordering(169) 00:20:33.469 fused_ordering(170) 00:20:33.469 fused_ordering(171) 00:20:33.469 fused_ordering(172) 00:20:33.469 fused_ordering(173) 00:20:33.469 fused_ordering(174) 00:20:33.469 fused_ordering(175) 00:20:33.469 fused_ordering(176) 00:20:33.469 fused_ordering(177) 00:20:33.469 fused_ordering(178) 00:20:33.469 fused_ordering(179) 00:20:33.469 fused_ordering(180) 00:20:33.469 fused_ordering(181) 00:20:33.469 fused_ordering(182) 00:20:33.469 fused_ordering(183) 00:20:33.469 fused_ordering(184) 00:20:33.469 fused_ordering(185) 00:20:33.469 fused_ordering(186) 00:20:33.469 fused_ordering(187) 00:20:33.469 fused_ordering(188) 00:20:33.469 fused_ordering(189) 00:20:33.469 fused_ordering(190) 00:20:33.469 fused_ordering(191) 00:20:33.469 fused_ordering(192) 00:20:33.469 fused_ordering(193) 00:20:33.469 fused_ordering(194) 00:20:33.469 fused_ordering(195) 00:20:33.469 fused_ordering(196) 00:20:33.469 fused_ordering(197) 00:20:33.469 fused_ordering(198) 00:20:33.469 fused_ordering(199) 00:20:33.469 fused_ordering(200) 00:20:33.469 fused_ordering(201) 00:20:33.469 fused_ordering(202) 00:20:33.469 fused_ordering(203) 00:20:33.469 fused_ordering(204) 00:20:33.469 fused_ordering(205) 00:20:33.469 
fused_ordering(206) 00:20:33.469 fused_ordering(207) 00:20:33.469 fused_ordering(208) 00:20:33.469 fused_ordering(209) 00:20:33.469 fused_ordering(210) 00:20:33.469 fused_ordering(211) 00:20:33.469 fused_ordering(212) 00:20:33.469 fused_ordering(213) 00:20:33.469 fused_ordering(214) 00:20:33.469 fused_ordering(215) 00:20:33.469 fused_ordering(216) 00:20:33.469 fused_ordering(217) 00:20:33.469 fused_ordering(218) 00:20:33.469 fused_ordering(219) 00:20:33.469 fused_ordering(220) 00:20:33.469 fused_ordering(221) 00:20:33.469 fused_ordering(222) 00:20:33.469 fused_ordering(223) 00:20:33.469 fused_ordering(224) 00:20:33.469 fused_ordering(225) 00:20:33.469 fused_ordering(226) 00:20:33.469 fused_ordering(227) 00:20:33.469 fused_ordering(228) 00:20:33.469 fused_ordering(229) 00:20:33.469 fused_ordering(230) 00:20:33.469 fused_ordering(231) 00:20:33.469 fused_ordering(232) 00:20:33.469 fused_ordering(233) 00:20:33.469 fused_ordering(234) 00:20:33.469 fused_ordering(235) 00:20:33.469 fused_ordering(236) 00:20:33.469 fused_ordering(237) 00:20:33.469 fused_ordering(238) 00:20:33.469 fused_ordering(239) 00:20:33.469 fused_ordering(240) 00:20:33.469 fused_ordering(241) 00:20:33.469 fused_ordering(242) 00:20:33.469 fused_ordering(243) 00:20:33.469 fused_ordering(244) 00:20:33.469 fused_ordering(245) 00:20:33.469 fused_ordering(246) 00:20:33.469 fused_ordering(247) 00:20:33.469 fused_ordering(248) 00:20:33.469 fused_ordering(249) 00:20:33.469 fused_ordering(250) 00:20:33.469 fused_ordering(251) 00:20:33.469 fused_ordering(252) 00:20:33.469 fused_ordering(253) 00:20:33.469 fused_ordering(254) 00:20:33.469 fused_ordering(255) 00:20:33.469 fused_ordering(256) 00:20:33.469 fused_ordering(257) 00:20:33.469 fused_ordering(258) 00:20:33.469 fused_ordering(259) 00:20:33.469 fused_ordering(260) 00:20:33.469 fused_ordering(261) 00:20:33.469 fused_ordering(262) 00:20:33.469 fused_ordering(263) 00:20:33.469 fused_ordering(264) 00:20:33.469 fused_ordering(265) 00:20:33.469 fused_ordering(266) 00:20:33.469 fused_ordering(267) 00:20:33.469 fused_ordering(268) 00:20:33.469 fused_ordering(269) 00:20:33.469 fused_ordering(270) 00:20:33.469 fused_ordering(271) 00:20:33.469 fused_ordering(272) 00:20:33.469 fused_ordering(273) 00:20:33.469 fused_ordering(274) 00:20:33.469 fused_ordering(275) 00:20:33.469 fused_ordering(276) 00:20:33.469 fused_ordering(277) 00:20:33.469 fused_ordering(278) 00:20:33.469 fused_ordering(279) 00:20:33.469 fused_ordering(280) 00:20:33.469 fused_ordering(281) 00:20:33.469 fused_ordering(282) 00:20:33.469 fused_ordering(283) 00:20:33.469 fused_ordering(284) 00:20:33.469 fused_ordering(285) 00:20:33.469 fused_ordering(286) 00:20:33.469 fused_ordering(287) 00:20:33.469 fused_ordering(288) 00:20:33.469 fused_ordering(289) 00:20:33.469 fused_ordering(290) 00:20:33.469 fused_ordering(291) 00:20:33.469 fused_ordering(292) 00:20:33.469 fused_ordering(293) 00:20:33.469 fused_ordering(294) 00:20:33.469 fused_ordering(295) 00:20:33.470 fused_ordering(296) 00:20:33.470 fused_ordering(297) 00:20:33.470 fused_ordering(298) 00:20:33.470 fused_ordering(299) 00:20:33.470 fused_ordering(300) 00:20:33.470 fused_ordering(301) 00:20:33.470 fused_ordering(302) 00:20:33.470 fused_ordering(303) 00:20:33.470 fused_ordering(304) 00:20:33.470 fused_ordering(305) 00:20:33.470 fused_ordering(306) 00:20:33.470 fused_ordering(307) 00:20:33.470 fused_ordering(308) 00:20:33.470 fused_ordering(309) 00:20:33.470 fused_ordering(310) 00:20:33.470 fused_ordering(311) 00:20:33.470 fused_ordering(312) 00:20:33.470 fused_ordering(313) 
00:20:33.470 fused_ordering(314) 00:20:33.470 fused_ordering(315) 00:20:33.470 fused_ordering(316) 00:20:33.470 fused_ordering(317) 00:20:33.470 fused_ordering(318) 00:20:33.470 fused_ordering(319) 00:20:33.470 fused_ordering(320) 00:20:33.470 fused_ordering(321) 00:20:33.470 fused_ordering(322) 00:20:33.470 fused_ordering(323) 00:20:33.470 fused_ordering(324) 00:20:33.470 fused_ordering(325) 00:20:33.470 fused_ordering(326) 00:20:33.470 fused_ordering(327) 00:20:33.470 fused_ordering(328) 00:20:33.470 fused_ordering(329) 00:20:33.470 fused_ordering(330) 00:20:33.470 fused_ordering(331) 00:20:33.470 fused_ordering(332) 00:20:33.470 fused_ordering(333) 00:20:33.470 fused_ordering(334) 00:20:33.470 fused_ordering(335) 00:20:33.470 fused_ordering(336) 00:20:33.470 fused_ordering(337) 00:20:33.470 fused_ordering(338) 00:20:33.470 fused_ordering(339) 00:20:33.470 fused_ordering(340) 00:20:33.470 fused_ordering(341) 00:20:33.470 fused_ordering(342) 00:20:33.470 fused_ordering(343) 00:20:33.470 fused_ordering(344) 00:20:33.470 fused_ordering(345) 00:20:33.470 fused_ordering(346) 00:20:33.470 fused_ordering(347) 00:20:33.470 fused_ordering(348) 00:20:33.470 fused_ordering(349) 00:20:33.470 fused_ordering(350) 00:20:33.470 fused_ordering(351) 00:20:33.470 fused_ordering(352) 00:20:33.470 fused_ordering(353) 00:20:33.470 fused_ordering(354) 00:20:33.470 fused_ordering(355) 00:20:33.470 fused_ordering(356) 00:20:33.470 fused_ordering(357) 00:20:33.470 fused_ordering(358) 00:20:33.470 fused_ordering(359) 00:20:33.470 fused_ordering(360) 00:20:33.470 fused_ordering(361) 00:20:33.470 fused_ordering(362) 00:20:33.470 fused_ordering(363) 00:20:33.470 fused_ordering(364) 00:20:33.470 fused_ordering(365) 00:20:33.470 fused_ordering(366) 00:20:33.470 fused_ordering(367) 00:20:33.470 fused_ordering(368) 00:20:33.470 fused_ordering(369) 00:20:33.470 fused_ordering(370) 00:20:33.470 fused_ordering(371) 00:20:33.470 fused_ordering(372) 00:20:33.470 fused_ordering(373) 00:20:33.470 fused_ordering(374) 00:20:33.470 fused_ordering(375) 00:20:33.470 fused_ordering(376) 00:20:33.470 fused_ordering(377) 00:20:33.470 fused_ordering(378) 00:20:33.470 fused_ordering(379) 00:20:33.470 fused_ordering(380) 00:20:33.470 fused_ordering(381) 00:20:33.470 fused_ordering(382) 00:20:33.470 fused_ordering(383) 00:20:33.470 fused_ordering(384) 00:20:33.470 fused_ordering(385) 00:20:33.470 fused_ordering(386) 00:20:33.470 fused_ordering(387) 00:20:33.470 fused_ordering(388) 00:20:33.470 fused_ordering(389) 00:20:33.470 fused_ordering(390) 00:20:33.470 fused_ordering(391) 00:20:33.470 fused_ordering(392) 00:20:33.470 fused_ordering(393) 00:20:33.470 fused_ordering(394) 00:20:33.470 fused_ordering(395) 00:20:33.470 fused_ordering(396) 00:20:33.470 fused_ordering(397) 00:20:33.470 fused_ordering(398) 00:20:33.470 fused_ordering(399) 00:20:33.470 fused_ordering(400) 00:20:33.470 fused_ordering(401) 00:20:33.470 fused_ordering(402) 00:20:33.470 fused_ordering(403) 00:20:33.470 fused_ordering(404) 00:20:33.470 fused_ordering(405) 00:20:33.470 fused_ordering(406) 00:20:33.470 fused_ordering(407) 00:20:33.470 fused_ordering(408) 00:20:33.470 fused_ordering(409) 00:20:33.470 fused_ordering(410) 00:20:33.730 fused_ordering(411) 00:20:33.730 fused_ordering(412) 00:20:33.730 fused_ordering(413) 00:20:33.730 fused_ordering(414) 00:20:33.730 fused_ordering(415) 00:20:33.730 fused_ordering(416) 00:20:33.730 fused_ordering(417) 00:20:33.730 fused_ordering(418) 00:20:33.730 fused_ordering(419) 00:20:33.730 fused_ordering(420) 00:20:33.730 
fused_ordering(421) 00:20:33.730 fused_ordering(422) 00:20:33.730 fused_ordering(423) 00:20:33.730 fused_ordering(424) 00:20:33.730 fused_ordering(425) 00:20:33.730 fused_ordering(426) 00:20:33.730 fused_ordering(427) 00:20:33.730 fused_ordering(428) 00:20:33.730 fused_ordering(429) 00:20:33.730 fused_ordering(430) 00:20:33.730 fused_ordering(431) 00:20:33.730 fused_ordering(432) 00:20:33.730 fused_ordering(433) 00:20:33.730 fused_ordering(434) 00:20:33.730 fused_ordering(435) 00:20:33.730 fused_ordering(436) 00:20:33.730 fused_ordering(437) 00:20:33.730 fused_ordering(438) 00:20:33.730 fused_ordering(439) 00:20:33.730 fused_ordering(440) 00:20:33.730 fused_ordering(441) 00:20:33.730 fused_ordering(442) 00:20:33.730 fused_ordering(443) 00:20:33.730 fused_ordering(444) 00:20:33.730 fused_ordering(445) 00:20:33.730 fused_ordering(446) 00:20:33.730 fused_ordering(447) 00:20:33.730 fused_ordering(448) 00:20:33.730 fused_ordering(449) 00:20:33.730 fused_ordering(450) 00:20:33.730 fused_ordering(451) 00:20:33.730 fused_ordering(452) 00:20:33.730 fused_ordering(453) 00:20:33.730 fused_ordering(454) 00:20:33.730 fused_ordering(455) 00:20:33.730 fused_ordering(456) 00:20:33.730 fused_ordering(457) 00:20:33.730 fused_ordering(458) 00:20:33.730 fused_ordering(459) 00:20:33.730 fused_ordering(460) 00:20:33.730 fused_ordering(461) 00:20:33.730 fused_ordering(462) 00:20:33.730 fused_ordering(463) 00:20:33.730 fused_ordering(464) 00:20:33.730 fused_ordering(465) 00:20:33.730 fused_ordering(466) 00:20:33.730 fused_ordering(467) 00:20:33.730 fused_ordering(468) 00:20:33.730 fused_ordering(469) 00:20:33.730 fused_ordering(470) 00:20:33.730 fused_ordering(471) 00:20:33.730 fused_ordering(472) 00:20:33.730 fused_ordering(473) 00:20:33.730 fused_ordering(474) 00:20:33.730 fused_ordering(475) 00:20:33.730 fused_ordering(476) 00:20:33.730 fused_ordering(477) 00:20:33.730 fused_ordering(478) 00:20:33.730 fused_ordering(479) 00:20:33.730 fused_ordering(480) 00:20:33.730 fused_ordering(481) 00:20:33.730 fused_ordering(482) 00:20:33.730 fused_ordering(483) 00:20:33.730 fused_ordering(484) 00:20:33.730 fused_ordering(485) 00:20:33.730 fused_ordering(486) 00:20:33.730 fused_ordering(487) 00:20:33.730 fused_ordering(488) 00:20:33.730 fused_ordering(489) 00:20:33.730 fused_ordering(490) 00:20:33.730 fused_ordering(491) 00:20:33.730 fused_ordering(492) 00:20:33.730 fused_ordering(493) 00:20:33.730 fused_ordering(494) 00:20:33.730 fused_ordering(495) 00:20:33.730 fused_ordering(496) 00:20:33.730 fused_ordering(497) 00:20:33.730 fused_ordering(498) 00:20:33.730 fused_ordering(499) 00:20:33.730 fused_ordering(500) 00:20:33.731 fused_ordering(501) 00:20:33.731 fused_ordering(502) 00:20:33.731 fused_ordering(503) 00:20:33.731 fused_ordering(504) 00:20:33.731 fused_ordering(505) 00:20:33.731 fused_ordering(506) 00:20:33.731 fused_ordering(507) 00:20:33.731 fused_ordering(508) 00:20:33.731 fused_ordering(509) 00:20:33.731 fused_ordering(510) 00:20:33.731 fused_ordering(511) 00:20:33.731 fused_ordering(512) 00:20:33.731 fused_ordering(513) 00:20:33.731 fused_ordering(514) 00:20:33.731 fused_ordering(515) 00:20:33.731 fused_ordering(516) 00:20:33.731 fused_ordering(517) 00:20:33.731 fused_ordering(518) 00:20:33.731 fused_ordering(519) 00:20:33.731 fused_ordering(520) 00:20:33.731 fused_ordering(521) 00:20:33.731 fused_ordering(522) 00:20:33.731 fused_ordering(523) 00:20:33.731 fused_ordering(524) 00:20:33.731 fused_ordering(525) 00:20:33.731 fused_ordering(526) 00:20:33.731 fused_ordering(527) 00:20:33.731 fused_ordering(528) 
00:20:33.731 fused_ordering(529) 00:20:33.731 fused_ordering(530) 00:20:33.731 fused_ordering(531) 00:20:33.731 fused_ordering(532) 00:20:33.731 fused_ordering(533) 00:20:33.731 fused_ordering(534) 00:20:33.731 fused_ordering(535) 00:20:33.731 fused_ordering(536) 00:20:33.731 fused_ordering(537) 00:20:33.731 fused_ordering(538) 00:20:33.731 fused_ordering(539) 00:20:33.731 fused_ordering(540) 00:20:33.731 fused_ordering(541) 00:20:33.731 fused_ordering(542) 00:20:33.731 fused_ordering(543) 00:20:33.731 fused_ordering(544) 00:20:33.731 fused_ordering(545) 00:20:33.731 fused_ordering(546) 00:20:33.731 fused_ordering(547) 00:20:33.731 fused_ordering(548) 00:20:33.731 fused_ordering(549) 00:20:33.731 fused_ordering(550) 00:20:33.731 fused_ordering(551) 00:20:33.731 fused_ordering(552) 00:20:33.731 fused_ordering(553) 00:20:33.731 fused_ordering(554) 00:20:33.731 fused_ordering(555) 00:20:33.731 fused_ordering(556) 00:20:33.731 fused_ordering(557) 00:20:33.731 fused_ordering(558) 00:20:33.731 fused_ordering(559) 00:20:33.731 fused_ordering(560) 00:20:33.731 fused_ordering(561) 00:20:33.731 fused_ordering(562) 00:20:33.731 fused_ordering(563) 00:20:33.731 fused_ordering(564) 00:20:33.731 fused_ordering(565) 00:20:33.731 fused_ordering(566) 00:20:33.731 fused_ordering(567) 00:20:33.731 fused_ordering(568) 00:20:33.731 fused_ordering(569) 00:20:33.731 fused_ordering(570) 00:20:33.731 fused_ordering(571) 00:20:33.731 fused_ordering(572) 00:20:33.731 fused_ordering(573) 00:20:33.731 fused_ordering(574) 00:20:33.731 fused_ordering(575) 00:20:33.731 fused_ordering(576) 00:20:33.731 fused_ordering(577) 00:20:33.731 fused_ordering(578) 00:20:33.731 fused_ordering(579) 00:20:33.731 fused_ordering(580) 00:20:33.731 fused_ordering(581) 00:20:33.731 fused_ordering(582) 00:20:33.731 fused_ordering(583) 00:20:33.731 fused_ordering(584) 00:20:33.731 fused_ordering(585) 00:20:33.731 fused_ordering(586) 00:20:33.731 fused_ordering(587) 00:20:33.731 fused_ordering(588) 00:20:33.731 fused_ordering(589) 00:20:33.731 fused_ordering(590) 00:20:33.731 fused_ordering(591) 00:20:33.731 fused_ordering(592) 00:20:33.731 fused_ordering(593) 00:20:33.731 fused_ordering(594) 00:20:33.731 fused_ordering(595) 00:20:33.731 fused_ordering(596) 00:20:33.731 fused_ordering(597) 00:20:33.731 fused_ordering(598) 00:20:33.731 fused_ordering(599) 00:20:33.731 fused_ordering(600) 00:20:33.731 fused_ordering(601) 00:20:33.731 fused_ordering(602) 00:20:33.731 fused_ordering(603) 00:20:33.731 fused_ordering(604) 00:20:33.731 fused_ordering(605) 00:20:33.731 fused_ordering(606) 00:20:33.731 fused_ordering(607) 00:20:33.731 fused_ordering(608) 00:20:33.731 fused_ordering(609) 00:20:33.731 fused_ordering(610) 00:20:33.731 fused_ordering(611) 00:20:33.731 fused_ordering(612) 00:20:33.731 fused_ordering(613) 00:20:33.731 fused_ordering(614) 00:20:33.731 fused_ordering(615) 00:20:33.731 fused_ordering(616) 00:20:33.731 fused_ordering(617) 00:20:33.731 fused_ordering(618) 00:20:33.731 fused_ordering(619) 00:20:33.731 fused_ordering(620) 00:20:33.731 fused_ordering(621) 00:20:33.731 fused_ordering(622) 00:20:33.731 fused_ordering(623) 00:20:33.731 fused_ordering(624) 00:20:33.731 fused_ordering(625) 00:20:33.731 fused_ordering(626) 00:20:33.731 fused_ordering(627) 00:20:33.731 fused_ordering(628) 00:20:33.731 fused_ordering(629) 00:20:33.731 fused_ordering(630) 00:20:33.731 fused_ordering(631) 00:20:33.731 fused_ordering(632) 00:20:33.731 fused_ordering(633) 00:20:33.731 fused_ordering(634) 00:20:33.731 fused_ordering(635) 00:20:33.731 
fused_ordering(636) 00:20:33.731 ... fused_ordering(958) (consecutive fused_ordering entries 636 through 958, timestamps 00:20:33.731 to 00:20:33.992)
00:20:33.992 fused_ordering(959) 00:20:33.992 fused_ordering(960) 00:20:33.992 fused_ordering(961) 00:20:33.992 fused_ordering(962) 00:20:33.992 fused_ordering(963) 00:20:33.992 fused_ordering(964) 00:20:33.992 fused_ordering(965) 00:20:33.992 fused_ordering(966) 00:20:33.992 fused_ordering(967) 00:20:33.992 fused_ordering(968) 00:20:33.992 fused_ordering(969) 00:20:33.992 fused_ordering(970) 00:20:33.992 fused_ordering(971) 00:20:33.992 fused_ordering(972) 00:20:33.992 fused_ordering(973) 00:20:33.992 fused_ordering(974) 00:20:33.992 fused_ordering(975) 00:20:33.992 fused_ordering(976) 00:20:33.992 fused_ordering(977) 00:20:33.992 fused_ordering(978) 00:20:33.992 fused_ordering(979) 00:20:33.992 fused_ordering(980) 00:20:33.992 fused_ordering(981) 00:20:33.992 fused_ordering(982) 00:20:33.992 fused_ordering(983) 00:20:33.992 fused_ordering(984) 00:20:33.992 fused_ordering(985) 00:20:33.992 fused_ordering(986) 00:20:33.992 fused_ordering(987) 00:20:33.992 fused_ordering(988) 00:20:33.992 fused_ordering(989) 00:20:33.992 fused_ordering(990) 00:20:33.992 fused_ordering(991) 00:20:33.992 fused_ordering(992) 00:20:33.992 fused_ordering(993) 00:20:33.992 fused_ordering(994) 00:20:33.992 fused_ordering(995) 00:20:33.992 fused_ordering(996) 00:20:33.992 fused_ordering(997) 00:20:33.992 fused_ordering(998) 00:20:33.992 fused_ordering(999) 00:20:33.992 fused_ordering(1000) 00:20:33.992 fused_ordering(1001) 00:20:33.992 fused_ordering(1002) 00:20:33.992 fused_ordering(1003) 00:20:33.992 fused_ordering(1004) 00:20:33.992 fused_ordering(1005) 00:20:33.992 fused_ordering(1006) 00:20:33.992 fused_ordering(1007) 00:20:33.992 fused_ordering(1008) 00:20:33.992 fused_ordering(1009) 00:20:33.992 fused_ordering(1010) 00:20:33.992 fused_ordering(1011) 00:20:33.992 fused_ordering(1012) 00:20:33.992 fused_ordering(1013) 00:20:33.992 fused_ordering(1014) 00:20:33.992 fused_ordering(1015) 00:20:33.992 fused_ordering(1016) 00:20:33.992 fused_ordering(1017) 00:20:33.992 fused_ordering(1018) 00:20:33.992 fused_ordering(1019) 00:20:33.992 fused_ordering(1020) 00:20:33.992 fused_ordering(1021) 00:20:33.992 fused_ordering(1022) 00:20:33.992 fused_ordering(1023) 00:20:33.992 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:33.992 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:33.992 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.992 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:20:33.992 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:33.992 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:33.992 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:20:33.992 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.992 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:33.992 rmmod nvme_rdma 00:20:34.252 rmmod nvme_fabrics 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:20:34.252 10:53:12 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1701798 ']' 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1701798 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1701798 ']' 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1701798 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1701798 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1701798' 00:20:34.252 killing process with pid 1701798 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1701798 00:20:34.252 10:53:12 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1701798 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:35.631 00:20:35.631 real 0m11.443s 00:20:35.631 user 0m6.340s 00:20:35.631 sys 0m6.678s 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:35.631 ************************************ 00:20:35.631 END TEST nvmf_fused_ordering 00:20:35.631 ************************************ 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:35.631 ************************************ 00:20:35.631 START TEST nvmf_ns_masking 00:20:35.631 ************************************ 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:20:35.631 * Looking for test storage... 
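The nvmf_fused_ordering teardown traced above follows the pattern these runs repeat after every sub-test: reset the trap, unload the host-side RDMA modules, then stop the nvmf_tgt process that was started for the test. A minimal sketch of that pattern, assuming the target pid is held in a shell variable named nvmfpid (the variable name and the retry/sleep details are illustrative only, not the actual nvmftestfini/killprocess implementation):

    set +e
    for i in {1..20}; do
        # unload the host-side NVMe/RDMA modules; retry while references drain
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"      # ask the nvmf_tgt reactor to exit cleanly
        wait "$nvmfpid"      # reap it (works because this shell started nvmf_tgt)
    fi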
00:20:35.631 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:35.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.631 --rc genhtml_branch_coverage=1 00:20:35.631 --rc genhtml_function_coverage=1 00:20:35.631 --rc genhtml_legend=1 00:20:35.631 --rc geninfo_all_blocks=1 00:20:35.631 --rc geninfo_unexecuted_blocks=1 00:20:35.631 00:20:35.631 ' 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:35.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.631 --rc genhtml_branch_coverage=1 00:20:35.631 --rc genhtml_function_coverage=1 00:20:35.631 --rc genhtml_legend=1 00:20:35.631 --rc geninfo_all_blocks=1 00:20:35.631 --rc geninfo_unexecuted_blocks=1 00:20:35.631 00:20:35.631 ' 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:35.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.631 --rc genhtml_branch_coverage=1 00:20:35.631 --rc genhtml_function_coverage=1 00:20:35.631 --rc genhtml_legend=1 00:20:35.631 --rc geninfo_all_blocks=1 00:20:35.631 --rc geninfo_unexecuted_blocks=1 00:20:35.631 00:20:35.631 ' 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:35.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.631 --rc genhtml_branch_coverage=1 00:20:35.631 --rc genhtml_function_coverage=1 00:20:35.631 --rc genhtml_legend=1 00:20:35.631 --rc geninfo_all_blocks=1 00:20:35.631 --rc geninfo_unexecuted_blocks=1 00:20:35.631 00:20:35.631 ' 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.631 10:53:14 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.631 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.632 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:35.632 10:53:14 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9e1294e9-0657-4ff6-942c-f8e9cfc7c701 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a06d4f78-a050-4dfa-924d-3b3887ae0be2 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=63836c85-514d-4329-9ea5-693351627d10 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:20:35.632 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.758 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.759 10:53:22 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:43.759 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:43.759 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:43.759 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:43.759 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:43.759 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:43.759 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:43.759 altname enp217s0f0np0 00:20:43.759 altname ens818f0np0 00:20:43.759 inet 192.168.100.8/24 scope global mlx_0_0 00:20:43.759 valid_lft forever preferred_lft forever 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:43.759 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:43.760 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:43.760 link/ether 
ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:43.760 altname enp217s0f1np1 00:20:43.760 altname ens818f1np1 00:20:43.760 inet 192.168.100.9/24 scope global mlx_0_1 00:20:43.760 valid_lft forever preferred_lft forever 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:43.760 192.168.100.9' 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:43.760 192.168.100.9' 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:43.760 192.168.100.9' 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1706296 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1706296 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1706296 ']' 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.760 10:53:22 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.760 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:44.020 [2024-11-28 10:53:22.648585] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:44.020 [2024-11-28 10:53:22.648685] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.020 [2024-11-28 10:53:22.799119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.020 [2024-11-28 10:53:22.896819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.020 [2024-11-28 10:53:22.896866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.020 [2024-11-28 10:53:22.896879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.020 [2024-11-28 10:53:22.896894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.020 [2024-11-28 10:53:22.896904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.020 [2024-11-28 10:53:22.898234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.625 10:53:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.625 10:53:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:20:44.625 10:53:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.625 10:53:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.625 10:53:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:44.625 10:53:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.625 10:53:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:44.884 [2024-11-28 10:53:23.680223] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f4b33b16940) succeed. 00:20:44.884 [2024-11-28 10:53:23.689779] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f4b339bd940) succeed. 
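The xtrace lines that follow interleave the ns_masking setup with shell bookkeeping; condensed, the RPC sequence this test drives looks roughly like the sketch below. Every command is taken from the trace itself; only the rpc shorthand for scripts/rpc.py is added here for readability:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # auto-visible to any host
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Masking variant exercised later in the run: re-add the namespace hidden,
    # then expose it to a single host NQN.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1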
00:20:45.143 10:53:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:20:45.143 10:53:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:20:45.143 10:53:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:45.143 Malloc1 00:20:45.403 10:53:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:45.403 Malloc2 00:20:45.403 10:53:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:45.662 10:53:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:20:45.921 10:53:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:46.181 [2024-11-28 10:53:24.833068] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:46.181 10:53:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:20:46.181 10:53:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 63836c85-514d-4329-9ea5-693351627d10 -a 192.168.100.8 -s 4420 -i 4 00:20:46.445 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:20:46.445 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:46.445 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:46.445 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:46.445 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:48.349 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:48.349 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:48.349 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:48.349 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:48.349 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:48.349 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:20:48.349 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:48.349 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:48.608 [ 0]:0x1 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55363102145948bfa4c200f7e34c0b6b 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55363102145948bfa4c200f7e34c0b6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:48.608 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:48.868 [ 0]:0x1 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55363102145948bfa4c200f7e34c0b6b 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55363102145948bfa4c200f7e34c0b6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:48.868 [ 1]:0x2 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd42db94c754a0b874de7b1e5e0e3e7 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd42db94c754a0b874de7b1e5e0e3e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:20:48.868 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:20:49.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:49.127 10:53:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:49.385 10:53:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:20:49.645 10:53:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:20:49.645 10:53:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 63836c85-514d-4329-9ea5-693351627d10 -a 192.168.100.8 -s 4420 -i 4 00:20:49.904 10:53:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:20:49.904 10:53:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:49.904 10:53:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:49.904 10:53:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:20:49.904 10:53:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:20:49.904 10:53:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:51.810 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:51.810 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:51.810 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:51.810 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:51.810 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:51.810 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:20:51.810 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:51.810 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:52.069 [ 0]:0x2 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd42db94c754a0b874de7b1e5e0e3e7 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd42db94c754a0b874de7b1e5e0e3e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:52.069 10:53:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:52.329 [ 0]:0x1 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:52.329 10:53:31 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55363102145948bfa4c200f7e34c0b6b 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55363102145948bfa4c200f7e34c0b6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:52.329 [ 1]:0x2 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd42db94c754a0b874de7b1e5e0e3e7 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd42db94c754a0b874de7b1e5e0e3e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:52.329 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:52.588 [ 0]:0x2 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd42db94c754a0b874de7b1e5e0e3e7 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd42db94c754a0b874de7b1e5e0e3e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:20:52.588 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:53.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:53.157 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:53.157 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:20:53.157 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 63836c85-514d-4329-9ea5-693351627d10 -a 192.168.100.8 -s 4420 -i 4 00:20:53.417 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:53.417 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:53.417 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:53.417 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:20:53.417 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:20:53.417 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:55.957 10:53:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:55.957 [ 0]:0x1 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55363102145948bfa4c200f7e34c0b6b 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55363102145948bfa4c200f7e34c0b6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:55.957 [ 1]:0x2 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd42db94c754a0b874de7b1e5e0e3e7 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd42db94c754a0b874de7b1e5e0e3e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:55.957 10:53:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:55.957 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:55.958 [ 0]:0x2 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd42db94c754a0b874de7b1e5e0e3e7 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd42db94c754a0b874de7b1e5e0e3e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:20:55.958 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:56.218 [2024-11-28 10:53:34.935899] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:20:56.218 request: 00:20:56.218 { 00:20:56.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.218 "nsid": 2, 00:20:56.218 "host": "nqn.2016-06.io.spdk:host1", 00:20:56.218 "method": "nvmf_ns_remove_host", 00:20:56.218 "req_id": 1 00:20:56.218 } 00:20:56.218 Got JSON-RPC error response 00:20:56.218 response: 00:20:56.218 { 00:20:56.218 "code": -32602, 00:20:56.218 "message": "Invalid parameters" 00:20:56.218 } 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:56.218 10:53:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:56.218 10:53:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:56.218 [ 0]:0x2 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd42db94c754a0b874de7b1e5e0e3e7 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd42db94c754a0b874de7b1e5e0e3e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:20:56.218 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:56.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:56.787 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1708586 00:20:56.787 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:20:56.787 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.787 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1708586 /var/tmp/host.sock 00:20:56.787 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1708586 ']' 00:20:56.787 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:56.787 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.787 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:56.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
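[editor's note] For readability, a minimal sketch of the ns_is_visible helper as it can be reconstructed from the xtrace above (variable names such as ctrl_id are inferred; the actual target/ns_masking.sh may differ in detail). The grep only prints the "[ n]:0xN" lines seen in the log; the verdict is the NGUID check, where an all-zero NGUID means the namespace is masked for this host.

    # Sketch reconstructed from the xtrace above; not the verbatim SPDK helper.
    ns_is_visible() {
        local nsid=$1                                   # e.g. 0x1 or 0x2
        nvme list-ns "/dev/${ctrl_id}" | grep "$nsid"   # ctrl_id resolved earlier via nvme list-subsys (nvme0)
        local nguid
        nguid=$(nvme id-ns "/dev/${ctrl_id}" -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper from common/autotest_common.sh simply runs this helper and expects it to fail, which is why the masked checks in the trace end with es=1.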
00:20:56.787 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.787 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:56.787 [2024-11-28 10:53:35.502028] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:56.787 [2024-11-28 10:53:35.502120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1708586 ] 00:20:56.787 [2024-11-28 10:53:35.655161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.046 [2024-11-28 10:53:35.757998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.986 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.986 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:20:57.986 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:57.986 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:57.986 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9e1294e9-0657-4ff6-942c-f8e9cfc7c701 00:20:57.986 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:58.245 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9E1294E906574FF6942CF8E9CFC7C701 -i 00:20:58.245 10:53:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a06d4f78-a050-4dfa-924d-3b3887ae0be2 00:20:58.245 10:53:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:58.245 10:53:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A06D4F78A0504DFA924D3B3887AE0BE2 -i 00:20:58.505 10:53:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:58.765 10:53:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:20:58.765 10:53:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:20:58.766 10:53:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:20:59.025 nvme0n1 00:20:59.285 10:53:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:59.285 10:53:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:59.285 nvme1n2 00:20:59.285 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:20:59.285 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:20:59.285 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:20:59.285 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:20:59.285 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:20:59.544 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:20:59.544 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:20:59.544 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:20:59.544 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:20:59.803 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9e1294e9-0657-4ff6-942c-f8e9cfc7c701 == \9\e\1\2\9\4\e\9\-\0\6\5\7\-\4\f\f\6\-\9\4\2\c\-\f\8\e\9\c\f\c\7\c\7\0\1 ]] 00:20:59.803 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:20:59.803 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:20:59.803 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:21:00.063 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a06d4f78-a050-4dfa-924d-3b3887ae0be2 == \a\0\6\d\4\f\7\8\-\a\0\5\0\-\4\d\f\a\-\9\2\4\d\-\3\b\3\8\8\7\a\e\0\b\e\2 ]] 00:21:00.063 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:00.063 10:53:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 9e1294e9-0657-4ff6-942c-f8e9cfc7c701 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9E1294E906574FF6942CF8E9CFC7C701 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9E1294E906574FF6942CF8E9CFC7C701 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:21:00.323 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9E1294E906574FF6942CF8E9CFC7C701 00:21:00.583 [2024-11-28 10:53:39.307161] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:21:00.583 [2024-11-28 10:53:39.307210] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:21:00.583 [2024-11-28 10:53:39.307225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.583 request: 00:21:00.583 { 00:21:00.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.583 "namespace": { 00:21:00.583 "bdev_name": "invalid", 00:21:00.583 "nsid": 1, 00:21:00.583 "nguid": "9E1294E906574FF6942CF8E9CFC7C701", 00:21:00.583 "no_auto_visible": false, 00:21:00.583 "hide_metadata": false 00:21:00.583 }, 00:21:00.583 "method": "nvmf_subsystem_add_ns", 00:21:00.583 "req_id": 1 00:21:00.583 } 00:21:00.583 Got JSON-RPC error response 00:21:00.583 response: 00:21:00.583 { 00:21:00.583 "code": -32602, 00:21:00.583 "message": "Invalid parameters" 00:21:00.583 } 00:21:00.583 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:00.583 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:00.583 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:00.583 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:00.583 
10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 9e1294e9-0657-4ff6-942c-f8e9cfc7c701 00:21:00.583 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:00.583 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9E1294E906574FF6942CF8E9CFC7C701 -i 00:21:00.842 10:53:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:21:02.747 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:21:02.747 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:21:02.748 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1708586 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1708586 ']' 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1708586 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1708586 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1708586' 00:21:03.006 killing process with pid 1708586 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1708586 00:21:03.006 10:53:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1708586 00:21:05.542 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.542 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:05.542 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:21:05.542 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.542 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:21:05.542 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:05.542 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:21:05.543 
10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:05.543 rmmod nvme_rdma 00:21:05.543 rmmod nvme_fabrics 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1706296 ']' 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1706296 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1706296 ']' 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1706296 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706296 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706296' 00:21:05.543 killing process with pid 1706296 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1706296 00:21:05.543 10:53:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1706296 00:21:07.450 10:53:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.450 10:53:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:07.450 00:21:07.450 real 0m31.588s 00:21:07.450 user 0m39.174s 00:21:07.450 sys 0m9.165s 00:21:07.450 10:53:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.450 10:53:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:07.450 ************************************ 00:21:07.450 END TEST nvmf_ns_masking 00:21:07.450 ************************************ 00:21:07.450 10:53:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:21:07.450 10:53:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:21:07.450 10:53:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:07.450 10:53:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.450 10:53:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:07.450 ************************************ 00:21:07.450 START TEST nvmf_nvme_cli 00:21:07.450 ************************************ 
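[editor's note] Before the nvme_cli output begins, the masking flow that the test above drove through scripts/rpc.py (full path as shown in the log) can be condensed as follows; NQNs, bdev names and NSIDs are taken verbatim from the trace:

    # Namespace added without auto-visibility: no host sees it until explicitly allowed.
    rpc.py nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # Grant host1 access to NSID 1 -> ns_is_visible 0x1 succeeds on the initiator.
    rpc.py nvmf_ns_add_host       nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # Revoke it again -> the namespace reports an all-zero NGUID and is treated as masked.
    rpc.py nvmf_ns_remove_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # Negative cases are wrapped in NOT: nvmf_ns_remove_host for NSID 2 and
    # nvmf_subsystem_add_ns with a nonexistent bdev both return the JSON-RPC
    # "Invalid parameters" errors captured above.
    # In the second phase the NGUIDs are the UUIDs upper-cased with dashes stripped
    # (the trace shows uuid2nguid using tr -d -), e.g.
    # 9e1294e9-0657-4ff6-942c-f8e9cfc7c701 -> 9E1294E906574FF6942CF8E9CFC7C701.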
00:21:07.450 10:53:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:21:07.450 * Looking for test storage... 00:21:07.450 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:07.450 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:07.450 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:21:07.450 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:07.450 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:07.450 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.450 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.450 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.450 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.450 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.450 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.450 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:07.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.451 --rc genhtml_branch_coverage=1 00:21:07.451 --rc genhtml_function_coverage=1 00:21:07.451 --rc genhtml_legend=1 00:21:07.451 --rc geninfo_all_blocks=1 00:21:07.451 --rc geninfo_unexecuted_blocks=1 00:21:07.451 00:21:07.451 ' 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:07.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.451 --rc genhtml_branch_coverage=1 00:21:07.451 --rc genhtml_function_coverage=1 00:21:07.451 --rc genhtml_legend=1 00:21:07.451 --rc geninfo_all_blocks=1 00:21:07.451 --rc geninfo_unexecuted_blocks=1 00:21:07.451 00:21:07.451 ' 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:07.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.451 --rc genhtml_branch_coverage=1 00:21:07.451 --rc genhtml_function_coverage=1 00:21:07.451 --rc genhtml_legend=1 00:21:07.451 --rc geninfo_all_blocks=1 00:21:07.451 --rc geninfo_unexecuted_blocks=1 00:21:07.451 00:21:07.451 ' 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:07.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.451 --rc genhtml_branch_coverage=1 00:21:07.451 --rc genhtml_function_coverage=1 00:21:07.451 --rc genhtml_legend=1 00:21:07.451 --rc geninfo_all_blocks=1 00:21:07.451 --rc geninfo_unexecuted_blocks=1 00:21:07.451 00:21:07.451 ' 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.451 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:07.452 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:07.452 10:53:46 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:21:07.452 10:53:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:17.444 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:17.444 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:17.444 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:17.444 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:21:17.444 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:17.445 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:17.445 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:17.445 altname enp217s0f0np0 00:21:17.445 altname ens818f0np0 00:21:17.445 inet 192.168.100.8/24 scope global mlx_0_0 00:21:17.445 valid_lft forever preferred_lft forever 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:17.445 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:17.445 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:17.445 altname enp217s0f1np1 00:21:17.445 altname ens818f1np1 00:21:17.445 inet 192.168.100.9/24 scope global mlx_0_1 00:21:17.445 valid_lft forever preferred_lft forever 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:17.445 10:53:54 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:17.445 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:17.446 192.168.100.9' 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:17.446 192.168.100.9' 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:17.446 192.168.100.9' 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:21:17.446 10:53:54 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1714589 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1714589 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1714589 ']' 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.446 10:53:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.446 [2024-11-28 10:53:55.066222] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:21:17.446 [2024-11-28 10:53:55.066319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.446 [2024-11-28 10:53:55.218955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.446 [2024-11-28 10:53:55.319760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.446 [2024-11-28 10:53:55.319816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
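The SPDK/DPDK notices around this point come from nvmfappstart launching /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF (nvmfpid=1714589 in this run) and then waiting for it to listen on /var/tmp/spdk.sock. A minimal stand-alone approximation of that bring-up, assuming the same build-tree layout as this run, would be:

#!/usr/bin/env bash
# Launch the SPDK NVMe-oF target the way this run does: shm instance 0,
# all tracepoint groups enabled, reactors pinned to cores 0-3.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Wait (up to ~10 s) for the app to expose its JSON-RPC UNIX socket.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done

The harness's waitforlisten does more than this sketch (it also checks that the PID is still alive and that the socket actually answers RPCs), so the loop above is an illustration only, not the test's implementation.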
00:21:17.446 [2024-11-28 10:53:55.319828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.446 [2024-11-28 10:53:55.319857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.446 [2024-11-28 10:53:55.319867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.446 [2024-11-28 10:53:55.322561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.446 [2024-11-28 10:53:55.322645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.446 [2024-11-28 10:53:55.322664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.446 [2024-11-28 10:53:55.322674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.446 10:53:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.446 10:53:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:21:17.446 10:53:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.446 10:53:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.446 10:53:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.446 10:53:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.446 10:53:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:17.446 10:53:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.446 10:53:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.446 [2024-11-28 10:53:55.956626] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fbb7afa4940) succeed. 00:21:17.446 [2024-11-28 10:53:55.966527] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fbb7af60940) succeed. 
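With both IB devices created, the rpc_cmd traces that follow configure the target over /var/tmp/spdk.sock. Replayed by hand through scripts/rpc.py, the same sequence (sizes, NQN, serial, and listen address exactly as they appear in this run) would look roughly like this sketch:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"

# RDMA transport with the shared-buffer and IO-unit settings used above
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Two 64 MB, 512-byte-block malloc bdevs (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_malloc_create 64 512 -b Malloc1

# Subsystem whose serial number the host side later greps for via lsblk
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1

# Listeners for the data subsystem and the discovery subsystem on the first RDMA IP
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The host side then exercises this setup with nvme discover and nvme connect -i 15 against 192.168.100.8:4420, as captured further down in the trace.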
00:21:17.446 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.446 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:17.446 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.446 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.446 Malloc0 00:21:17.446 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.446 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:17.446 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.446 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.706 Malloc1 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.706 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.707 [2024-11-28 10:53:56.408860] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:17.707 10:53:56 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:21:17.707 00:21:17.707 Discovery Log Number of Records 2, Generation counter 2 00:21:17.707 =====Discovery Log Entry 0====== 00:21:17.707 trtype: rdma 00:21:17.707 adrfam: ipv4 00:21:17.707 subtype: current discovery subsystem 00:21:17.707 treq: not required 00:21:17.707 portid: 0 00:21:17.707 trsvcid: 4420 00:21:17.707 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:17.707 traddr: 192.168.100.8 00:21:17.707 eflags: explicit discovery connections, duplicate discovery information 00:21:17.707 rdma_prtype: not specified 00:21:17.707 rdma_qptype: connected 00:21:17.707 rdma_cms: rdma-cm 00:21:17.707 rdma_pkey: 0x0000 00:21:17.707 =====Discovery Log Entry 1====== 00:21:17.707 trtype: rdma 00:21:17.707 adrfam: ipv4 00:21:17.707 subtype: nvme subsystem 00:21:17.707 treq: not required 00:21:17.707 portid: 0 00:21:17.707 trsvcid: 4420 00:21:17.707 subnqn: nqn.2016-06.io.spdk:cnode1 00:21:17.707 traddr: 192.168.100.8 00:21:17.707 eflags: none 00:21:17.707 rdma_prtype: not specified 00:21:17.707 rdma_qptype: connected 00:21:17.707 rdma_cms: rdma-cm 00:21:17.707 rdma_pkey: 0x0000 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:21:17.707 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:19.086 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:19.087 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:21:19.087 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:21:19.087 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:21:19.087 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:21:19.087 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:21:20.994 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:21:20.995 /dev/nvme0n2 ]] 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:21:20.995 10:53:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:21.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.933 
10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:21.933 rmmod nvme_rdma 00:21:21.933 rmmod nvme_fabrics 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1714589 ']' 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1714589 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1714589 ']' 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1714589 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1714589 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1714589' 00:21:21.933 killing process with pid 1714589 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1714589 00:21:21.933 10:54:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1714589 00:21:24.469 10:54:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:24.470 10:54:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:24.470 00:21:24.470 real 0m16.902s 00:21:24.470 user 0m30.661s 00:21:24.470 sys 0m7.628s 00:21:24.470 10:54:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.470 10:54:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:24.470 ************************************ 00:21:24.470 END TEST nvmf_nvme_cli 00:21:24.470 ************************************ 00:21:24.470 10:54:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:21:24.470 10:54:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:21:24.470 10:54:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:24.470 10:54:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.470 10:54:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:24.470 ************************************ 00:21:24.470 START TEST nvmf_auth_target 00:21:24.470 ************************************ 00:21:24.470 10:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:21:24.470 * Looking for test storage... 00:21:24.470 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:24.470 10:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:24.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.470 --rc genhtml_branch_coverage=1 00:21:24.470 --rc genhtml_function_coverage=1 00:21:24.470 --rc genhtml_legend=1 00:21:24.470 --rc geninfo_all_blocks=1 00:21:24.470 --rc geninfo_unexecuted_blocks=1 00:21:24.470 00:21:24.470 ' 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:24.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.470 --rc genhtml_branch_coverage=1 00:21:24.470 --rc genhtml_function_coverage=1 00:21:24.470 --rc genhtml_legend=1 00:21:24.470 --rc geninfo_all_blocks=1 00:21:24.470 --rc geninfo_unexecuted_blocks=1 00:21:24.470 00:21:24.470 ' 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:24.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.470 --rc genhtml_branch_coverage=1 00:21:24.470 --rc genhtml_function_coverage=1 00:21:24.470 --rc genhtml_legend=1 00:21:24.470 --rc geninfo_all_blocks=1 00:21:24.470 --rc geninfo_unexecuted_blocks=1 00:21:24.470 00:21:24.470 ' 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:24.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.470 --rc genhtml_branch_coverage=1 00:21:24.470 --rc genhtml_function_coverage=1 00:21:24.470 --rc genhtml_legend=1 00:21:24.470 --rc geninfo_all_blocks=1 00:21:24.470 --rc geninfo_unexecuted_blocks=1 00:21:24.470 00:21:24.470 ' 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.470 10:54:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.470 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:24.471 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:24.471 10:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.602 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:32.603 10:54:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:32.603 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:32.603 10:54:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:32.603 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:32.603 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:32.603 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.603 10:54:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:32.603 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:32.864 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:32.864 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:32.864 altname enp217s0f0np0 00:21:32.864 altname ens818f0np0 00:21:32.864 inet 192.168.100.8/24 scope global mlx_0_0 00:21:32.864 valid_lft forever preferred_lft forever 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:32.864 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:32.864 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:32.864 altname enp217s0f1np1 00:21:32.864 altname ens818f1np1 00:21:32.864 inet 192.168.100.9/24 scope global mlx_0_1 00:21:32.864 valid_lft forever preferred_lft forever 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:32.864 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:32.865 10:54:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:32.865 192.168.100.9' 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:32.865 192.168.100.9' 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:32.865 192.168.100.9' 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1720484 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1720484 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1720484 ']' 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
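At this point the discovered address list is reduced to a primary and a secondary target IP, the RDMA transport options are set, and the nvmf target application is started with the nvmf_auth debug log flag. A compact sketch of that selection and startup, assuming RDMA_IP_LIST holds one address per line as printed above (the nvmf_tgt invocation is the one used in this workspace):

    #!/usr/bin/env bash
    # One IPv4 per line, as produced by the interface loop above.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'

    # First listed address becomes the primary listener, the next the secondary,
    # mirroring the head/tail pipeline in nvmf/common.sh.
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    [[ -n $NVMF_FIRST_TARGET_IP ]] || { echo "no RDMA-capable interface found" >&2; exit 1; }

    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    # modprobe nvme-rdma   # loaded at this point in the trace (requires root)

    # The target itself is then launched as in the trace and its RPC socket awaited:
    #   build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    #   (waitforlisten polls /var/tmp/spdk.sock until the process answers)
    echo "primary=$NVMF_FIRST_TARGET_IP secondary=$NVMF_SECOND_TARGET_IP opts=$NVMF_TRANSPORT_OPTS"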
00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.865 10:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1720760 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:21:33.804 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=036d08bf418457f4f9a118d54efbdb0aaaa66a637869f2fc 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wMD 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 036d08bf418457f4f9a118d54efbdb0aaaa66a637869f2fc 0 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 036d08bf418457f4f9a118d54efbdb0aaaa66a637869f2fc 0 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=036d08bf418457f4f9a118d54efbdb0aaaa66a637869f2fc 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wMD 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wMD 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.wMD 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:33.805 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4df791202a87e1d7868afa7637a142d5b2af5c50749df16bdece11cc7e3bab1e 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ChZ 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4df791202a87e1d7868afa7637a142d5b2af5c50749df16bdece11cc7e3bab1e 3 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4df791202a87e1d7868afa7637a142d5b2af5c50749df16bdece11cc7e3bab1e 3 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4df791202a87e1d7868afa7637a142d5b2af5c50749df16bdece11cc7e3bab1e 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ChZ 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ChZ 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ChZ 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:34.065 10:54:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=838594f52b47a0adc2afc2312442efb7 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.lgf 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 838594f52b47a0adc2afc2312442efb7 1 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 838594f52b47a0adc2afc2312442efb7 1 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=838594f52b47a0adc2afc2312442efb7 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.lgf 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.lgf 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.lgf 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c7c2d80631157b77744b7799ea5d33fef97229edd72610a1 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.VVl 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c7c2d80631157b77744b7799ea5d33fef97229edd72610a1 2 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c7c2d80631157b77744b7799ea5d33fef97229edd72610a1 2 00:21:34.065 10:54:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:34.065 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c7c2d80631157b77744b7799ea5d33fef97229edd72610a1 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.VVl 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.VVl 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.VVl 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c7975c83de29f09b49f02d5c342bc3744675d0bf87ad9669 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.nX4 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c7975c83de29f09b49f02d5c342bc3744675d0bf87ad9669 2 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c7975c83de29f09b49f02d5c342bc3744675d0bf87ad9669 2 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c7975c83de29f09b49f02d5c342bc3744675d0bf87ad9669 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.nX4 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.nX4 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.nX4 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=191cdbf5f02cb66ef7a848bbc0aee072 00:21:34.066 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HkE 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 191cdbf5f02cb66ef7a848bbc0aee072 1 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 191cdbf5f02cb66ef7a848bbc0aee072 1 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=191cdbf5f02cb66ef7a848bbc0aee072 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HkE 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HkE 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.HkE 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:34.326 10:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:34.326 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:34.326 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2be5fb3fb8d055cfee0ba0df11b022e38ee327519c83527826184e103c5f95b1 00:21:34.326 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:34.326 10:54:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5Si 00:21:34.326 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2be5fb3fb8d055cfee0ba0df11b022e38ee327519c83527826184e103c5f95b1 3 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2be5fb3fb8d055cfee0ba0df11b022e38ee327519c83527826184e103c5f95b1 3 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2be5fb3fb8d055cfee0ba0df11b022e38ee327519c83527826184e103c5f95b1 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5Si 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5Si 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.5Si 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1720484 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1720484 ']' 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.327 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.586 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.587 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:34.587 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1720760 /var/tmp/host.sock 00:21:34.587 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1720760 ']' 00:21:34.587 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:34.587 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.587 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
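Each of the secrets above follows the same recipe: a hex string is read from /dev/urandom with xxd, wrapped into a DHHC-1 secret, written to a mktemp file, and restricted to mode 0600. The trace only shows a bare "python -" for the wrapping step, so the encoding in the sketch below (base64 of the key text plus a little-endian CRC32, with the digest id as a two-digit hex field) is an assumption inferred from the DHHC-1:00: and DHHC-1:03: secrets that appear later in the run, not the script's literal helper:

    #!/usr/bin/env bash
    # Sketch of gen_dhchap_key: <digest> <key length in hex chars>, e.g. "null 48".
    gen_dhchap_key() {
        local digest=$1 len=$2
        declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

        # Raw key material: len hex characters == len/2 random bytes (xxd, as in the trace).
        local key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

        local file
        file=$(mktemp -t "spdk.key-$digest.XXX")

        # ASSUMPTION: DHHC-1 secrets look like "DHHC-1:<hmac id>:<base64(key text + CRC32)>:".
        # The trace only shows "python -" here; this stand-in snippet is illustrative.
        python3 - "$key" "${digests[$digest]}" > "$file" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")
    print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
    EOF

        chmod 0600 "$file"
        echo "$file"
    }

    keys[0]=$(gen_dhchap_key null 48)      # e.g. /tmp/spdk.key-null.XXX
    ckeys[0]=$(gen_dhchap_key sha512 64)

The resulting file paths are what the keyring_file_add_key RPCs below register on both the target and the host side.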
00:21:34.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:21:34.587 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.587 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.846 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.846 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:34.846 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:34.846 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.846 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.106 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.106 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:35.106 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wMD 00:21:35.106 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.106 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.106 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.106 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wMD 00:21:35.106 10:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wMD 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ChZ ]] 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ChZ 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ChZ 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ChZ 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.lgf 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.366 10:54:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.lgf 00:21:35.366 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.lgf 00:21:35.626 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.VVl ]] 00:21:35.626 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VVl 00:21:35.626 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.626 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.626 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.626 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VVl 00:21:35.626 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VVl 00:21:35.885 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:35.886 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nX4 00:21:35.886 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.886 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.886 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.886 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.nX4 00:21:35.886 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.nX4 00:21:36.145 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.HkE ]] 00:21:36.145 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HkE 00:21:36.145 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.145 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.145 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.145 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HkE 00:21:36.145 10:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HkE 00:21:36.145 10:54:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:36.145 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5Si 00:21:36.145 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.145 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.145 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.145 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.5Si 00:21:36.145 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.5Si 00:21:36.414 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:36.414 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:36.414 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.414 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.414 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:36.414 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.678 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.937 00:21:36.937 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.937 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.937 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.197 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.197 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.197 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.197 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.197 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.197 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.197 { 00:21:37.197 "cntlid": 1, 00:21:37.197 "qid": 0, 00:21:37.197 "state": "enabled", 00:21:37.197 "thread": "nvmf_tgt_poll_group_000", 00:21:37.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:37.197 "listen_address": { 00:21:37.197 "trtype": "RDMA", 00:21:37.197 "adrfam": "IPv4", 00:21:37.197 "traddr": "192.168.100.8", 00:21:37.197 "trsvcid": "4420" 00:21:37.197 }, 00:21:37.197 "peer_address": { 00:21:37.197 "trtype": "RDMA", 00:21:37.197 "adrfam": "IPv4", 00:21:37.197 "traddr": "192.168.100.8", 00:21:37.197 "trsvcid": "56531" 00:21:37.197 }, 00:21:37.197 "auth": { 00:21:37.197 "state": "completed", 00:21:37.197 "digest": "sha256", 00:21:37.197 "dhgroup": "null" 00:21:37.197 } 00:21:37.197 } 00:21:37.197 ]' 00:21:37.197 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.197 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:37.197 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.197 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:37.197 10:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.197 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.197 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.197 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:37.456 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:21:37.456 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:21:38.024 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.348 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:38.348 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.348 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.348 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.348 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.348 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:38.348 10:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.348 10:54:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.348 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.691 00:21:38.691 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.691 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.691 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.950 { 00:21:38.950 "cntlid": 3, 00:21:38.950 "qid": 0, 00:21:38.950 "state": "enabled", 00:21:38.950 "thread": "nvmf_tgt_poll_group_000", 00:21:38.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:38.950 "listen_address": { 00:21:38.950 "trtype": "RDMA", 00:21:38.950 "adrfam": "IPv4", 00:21:38.950 "traddr": "192.168.100.8", 00:21:38.950 "trsvcid": "4420" 00:21:38.950 }, 00:21:38.950 "peer_address": { 00:21:38.950 "trtype": "RDMA", 00:21:38.950 "adrfam": "IPv4", 00:21:38.950 "traddr": "192.168.100.8", 00:21:38.950 "trsvcid": "56012" 00:21:38.950 }, 00:21:38.950 "auth": { 00:21:38.950 "state": "completed", 00:21:38.950 "digest": "sha256", 00:21:38.950 "dhgroup": "null" 00:21:38.950 } 00:21:38.950 } 00:21:38.950 ]' 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.950 10:54:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.950 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.209 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:21:39.209 10:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:21:39.774 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.032 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:40.032 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.032 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.032 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.032 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.032 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:40.032 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.290 10:54:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.290 10:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.290 00:21:40.549 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.549 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.549 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.549 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.549 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.549 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.549 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.549 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.549 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.549 { 00:21:40.549 "cntlid": 5, 00:21:40.549 "qid": 0, 00:21:40.549 "state": "enabled", 00:21:40.549 "thread": "nvmf_tgt_poll_group_000", 00:21:40.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:40.549 "listen_address": { 00:21:40.549 "trtype": "RDMA", 00:21:40.549 "adrfam": "IPv4", 00:21:40.549 "traddr": "192.168.100.8", 00:21:40.549 "trsvcid": "4420" 00:21:40.549 }, 00:21:40.549 "peer_address": { 00:21:40.549 "trtype": "RDMA", 00:21:40.549 "adrfam": "IPv4", 00:21:40.549 "traddr": "192.168.100.8", 00:21:40.549 "trsvcid": "46055" 00:21:40.549 }, 00:21:40.549 "auth": { 00:21:40.549 "state": "completed", 00:21:40.549 "digest": "sha256", 00:21:40.549 "dhgroup": "null" 00:21:40.549 } 00:21:40.549 } 00:21:40.549 ]' 00:21:40.549 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.549 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:40.807 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.807 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.807 10:54:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.807 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.807 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.807 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.067 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:21:41.067 10:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:21:41.635 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.635 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:41.635 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.635 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.635 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.635 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.635 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:41.635 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.894 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.154 00:21:42.154 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.154 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.154 10:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.413 { 00:21:42.413 "cntlid": 7, 00:21:42.413 "qid": 0, 00:21:42.413 "state": "enabled", 00:21:42.413 "thread": "nvmf_tgt_poll_group_000", 00:21:42.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:42.413 "listen_address": { 00:21:42.413 "trtype": "RDMA", 00:21:42.413 "adrfam": "IPv4", 00:21:42.413 "traddr": "192.168.100.8", 00:21:42.413 "trsvcid": "4420" 00:21:42.413 }, 00:21:42.413 "peer_address": { 00:21:42.413 "trtype": "RDMA", 00:21:42.413 "adrfam": "IPv4", 00:21:42.413 "traddr": "192.168.100.8", 00:21:42.413 "trsvcid": "53306" 00:21:42.413 }, 00:21:42.413 "auth": { 00:21:42.413 "state": "completed", 00:21:42.413 "digest": "sha256", 00:21:42.413 "dhgroup": "null" 00:21:42.413 } 00:21:42.413 } 00:21:42.413 ]' 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.413 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.672 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:21:42.672 10:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:21:43.240 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.500 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:43.500 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.500 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.500 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.500 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.500 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.500 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.500 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.760 10:54:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.760 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.019 00:21:44.019 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.019 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.019 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.019 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.019 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.019 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.019 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.019 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.019 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.019 { 00:21:44.019 "cntlid": 9, 00:21:44.019 "qid": 0, 00:21:44.019 "state": "enabled", 00:21:44.019 "thread": "nvmf_tgt_poll_group_000", 00:21:44.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:44.019 "listen_address": { 00:21:44.019 "trtype": "RDMA", 00:21:44.019 "adrfam": "IPv4", 00:21:44.019 "traddr": "192.168.100.8", 00:21:44.019 "trsvcid": "4420" 00:21:44.019 }, 00:21:44.019 "peer_address": { 00:21:44.019 "trtype": "RDMA", 00:21:44.019 "adrfam": "IPv4", 00:21:44.019 "traddr": "192.168.100.8", 00:21:44.019 "trsvcid": "53539" 00:21:44.019 }, 00:21:44.019 "auth": { 00:21:44.019 "state": "completed", 00:21:44.019 "digest": "sha256", 00:21:44.019 "dhgroup": "ffdhe2048" 00:21:44.019 } 00:21:44.019 } 00:21:44.019 ]' 00:21:44.019 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:21:44.279 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:44.279 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.279 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:44.279 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.279 10:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.279 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.279 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.538 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:21:44.538 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:21:45.108 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.108 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:45.108 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.108 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.108 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.108 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.108 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:45.108 10:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:45.367 10:54:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.367 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.626 00:21:45.626 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.626 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.626 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.884 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.884 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.884 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.884 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.884 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.884 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.884 { 00:21:45.884 "cntlid": 11, 00:21:45.884 "qid": 0, 00:21:45.884 "state": "enabled", 00:21:45.884 "thread": "nvmf_tgt_poll_group_000", 00:21:45.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:45.884 "listen_address": { 00:21:45.884 "trtype": "RDMA", 00:21:45.884 "adrfam": "IPv4", 00:21:45.884 "traddr": "192.168.100.8", 00:21:45.884 "trsvcid": "4420" 00:21:45.884 }, 00:21:45.884 "peer_address": { 00:21:45.884 "trtype": "RDMA", 00:21:45.884 "adrfam": "IPv4", 00:21:45.884 "traddr": 
"192.168.100.8", 00:21:45.884 "trsvcid": "40387" 00:21:45.884 }, 00:21:45.884 "auth": { 00:21:45.885 "state": "completed", 00:21:45.885 "digest": "sha256", 00:21:45.885 "dhgroup": "ffdhe2048" 00:21:45.885 } 00:21:45.885 } 00:21:45.885 ]' 00:21:45.885 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.885 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:45.885 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.885 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.885 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.885 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.885 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.885 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.143 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:21:46.143 10:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:21:46.712 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.970 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:46.970 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.970 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.970 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.971 10:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.229 00:21:47.229 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.229 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.229 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.489 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.489 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.489 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.489 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.489 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.489 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.489 { 00:21:47.489 "cntlid": 13, 00:21:47.489 "qid": 0, 00:21:47.489 "state": "enabled", 00:21:47.489 "thread": "nvmf_tgt_poll_group_000", 00:21:47.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:47.489 "listen_address": { 00:21:47.489 
"trtype": "RDMA", 00:21:47.489 "adrfam": "IPv4", 00:21:47.489 "traddr": "192.168.100.8", 00:21:47.489 "trsvcid": "4420" 00:21:47.489 }, 00:21:47.489 "peer_address": { 00:21:47.489 "trtype": "RDMA", 00:21:47.489 "adrfam": "IPv4", 00:21:47.489 "traddr": "192.168.100.8", 00:21:47.489 "trsvcid": "49728" 00:21:47.489 }, 00:21:47.489 "auth": { 00:21:47.489 "state": "completed", 00:21:47.489 "digest": "sha256", 00:21:47.489 "dhgroup": "ffdhe2048" 00:21:47.489 } 00:21:47.489 } 00:21:47.489 ]' 00:21:47.489 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.489 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.489 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.748 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.748 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.748 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.748 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.748 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.006 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:21:48.006 10:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:21:48.575 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.575 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:48.575 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.575 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.575 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.575 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.575 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:48.575 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.834 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.093 00:21:49.093 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.093 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.093 10:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.352 { 00:21:49.352 "cntlid": 15, 00:21:49.352 "qid": 0, 00:21:49.352 "state": "enabled", 
00:21:49.352 "thread": "nvmf_tgt_poll_group_000", 00:21:49.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:49.352 "listen_address": { 00:21:49.352 "trtype": "RDMA", 00:21:49.352 "adrfam": "IPv4", 00:21:49.352 "traddr": "192.168.100.8", 00:21:49.352 "trsvcid": "4420" 00:21:49.352 }, 00:21:49.352 "peer_address": { 00:21:49.352 "trtype": "RDMA", 00:21:49.352 "adrfam": "IPv4", 00:21:49.352 "traddr": "192.168.100.8", 00:21:49.352 "trsvcid": "35556" 00:21:49.352 }, 00:21:49.352 "auth": { 00:21:49.352 "state": "completed", 00:21:49.352 "digest": "sha256", 00:21:49.352 "dhgroup": "ffdhe2048" 00:21:49.352 } 00:21:49.352 } 00:21:49.352 ]' 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.352 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.611 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:21:49.611 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:21:50.178 10:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.438 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:50.438 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.438 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.438 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.438 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.438 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.438 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.439 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.698 00:21:50.698 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.698 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.698 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.957 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.957 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.957 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.957 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.957 10:54:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.957 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.957 { 00:21:50.957 "cntlid": 17, 00:21:50.957 "qid": 0, 00:21:50.957 "state": "enabled", 00:21:50.957 "thread": "nvmf_tgt_poll_group_000", 00:21:50.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:50.957 "listen_address": { 00:21:50.957 "trtype": "RDMA", 00:21:50.957 "adrfam": "IPv4", 00:21:50.957 "traddr": "192.168.100.8", 00:21:50.957 "trsvcid": "4420" 00:21:50.957 }, 00:21:50.957 "peer_address": { 00:21:50.957 "trtype": "RDMA", 00:21:50.957 "adrfam": "IPv4", 00:21:50.957 "traddr": "192.168.100.8", 00:21:50.957 "trsvcid": "45348" 00:21:50.957 }, 00:21:50.957 "auth": { 00:21:50.957 "state": "completed", 00:21:50.957 "digest": "sha256", 00:21:50.957 "dhgroup": "ffdhe3072" 00:21:50.957 } 00:21:50.957 } 00:21:50.957 ]' 00:21:50.957 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.957 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.957 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.217 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.217 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.217 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.217 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.217 10:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.476 10:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:21:51.476 10:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:21:52.044 10:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.044 10:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:52.044 10:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.044 10:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.044 10:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.044 10:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.044 10:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:52.044 10:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.304 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.564 00:21:52.564 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.564 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.564 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.824 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.824 10:54:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.824 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.824 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.824 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.824 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.824 { 00:21:52.824 "cntlid": 19, 00:21:52.824 "qid": 0, 00:21:52.824 "state": "enabled", 00:21:52.824 "thread": "nvmf_tgt_poll_group_000", 00:21:52.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:52.824 "listen_address": { 00:21:52.824 "trtype": "RDMA", 00:21:52.824 "adrfam": "IPv4", 00:21:52.824 "traddr": "192.168.100.8", 00:21:52.824 "trsvcid": "4420" 00:21:52.824 }, 00:21:52.824 "peer_address": { 00:21:52.824 "trtype": "RDMA", 00:21:52.824 "adrfam": "IPv4", 00:21:52.824 "traddr": "192.168.100.8", 00:21:52.824 "trsvcid": "53101" 00:21:52.824 }, 00:21:52.824 "auth": { 00:21:52.824 "state": "completed", 00:21:52.824 "digest": "sha256", 00:21:52.824 "dhgroup": "ffdhe3072" 00:21:52.824 } 00:21:52.824 } 00:21:52.824 ]' 00:21:52.825 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.825 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.825 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.825 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.825 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.825 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.825 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.825 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.084 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:21:53.084 10:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:21:53.652 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.911 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:53.911 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.911 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.911 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.911 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.911 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:53.911 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.170 10:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.430 00:21:54.430 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.430 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.430 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.430 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.430 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.430 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.430 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.430 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.430 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.430 { 00:21:54.430 "cntlid": 21, 00:21:54.430 "qid": 0, 00:21:54.430 "state": "enabled", 00:21:54.430 "thread": "nvmf_tgt_poll_group_000", 00:21:54.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:54.430 "listen_address": { 00:21:54.430 "trtype": "RDMA", 00:21:54.430 "adrfam": "IPv4", 00:21:54.430 "traddr": "192.168.100.8", 00:21:54.430 "trsvcid": "4420" 00:21:54.430 }, 00:21:54.430 "peer_address": { 00:21:54.430 "trtype": "RDMA", 00:21:54.430 "adrfam": "IPv4", 00:21:54.430 "traddr": "192.168.100.8", 00:21:54.430 "trsvcid": "57720" 00:21:54.430 }, 00:21:54.430 "auth": { 00:21:54.430 "state": "completed", 00:21:54.430 "digest": "sha256", 00:21:54.430 "dhgroup": "ffdhe3072" 00:21:54.430 } 00:21:54.430 } 00:21:54.430 ]' 00:21:54.430 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.690 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.690 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.690 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.690 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.690 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.690 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.690 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.948 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:21:54.948 10:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:21:55.512 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.512 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:55.512 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.512 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.512 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.512 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.512 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:55.512 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.770 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.028 00:21:56.028 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.028 10:54:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.028 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.286 { 00:21:56.286 "cntlid": 23, 00:21:56.286 "qid": 0, 00:21:56.286 "state": "enabled", 00:21:56.286 "thread": "nvmf_tgt_poll_group_000", 00:21:56.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:56.286 "listen_address": { 00:21:56.286 "trtype": "RDMA", 00:21:56.286 "adrfam": "IPv4", 00:21:56.286 "traddr": "192.168.100.8", 00:21:56.286 "trsvcid": "4420" 00:21:56.286 }, 00:21:56.286 "peer_address": { 00:21:56.286 "trtype": "RDMA", 00:21:56.286 "adrfam": "IPv4", 00:21:56.286 "traddr": "192.168.100.8", 00:21:56.286 "trsvcid": "36997" 00:21:56.286 }, 00:21:56.286 "auth": { 00:21:56.286 "state": "completed", 00:21:56.286 "digest": "sha256", 00:21:56.286 "dhgroup": "ffdhe3072" 00:21:56.286 } 00:21:56.286 } 00:21:56.286 ]' 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.286 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.544 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:21:56.544 10:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:21:57.111 10:54:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.369 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:57.369 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.369 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.369 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.369 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.369 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.369 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:57.369 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.628 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.885 00:21:57.885 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.885 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.885 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.885 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.885 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.885 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.885 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.143 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.143 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.143 { 00:21:58.143 "cntlid": 25, 00:21:58.143 "qid": 0, 00:21:58.143 "state": "enabled", 00:21:58.143 "thread": "nvmf_tgt_poll_group_000", 00:21:58.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:58.143 "listen_address": { 00:21:58.143 "trtype": "RDMA", 00:21:58.143 "adrfam": "IPv4", 00:21:58.143 "traddr": "192.168.100.8", 00:21:58.143 "trsvcid": "4420" 00:21:58.143 }, 00:21:58.143 "peer_address": { 00:21:58.143 "trtype": "RDMA", 00:21:58.143 "adrfam": "IPv4", 00:21:58.143 "traddr": "192.168.100.8", 00:21:58.143 "trsvcid": "49198" 00:21:58.143 }, 00:21:58.143 "auth": { 00:21:58.143 "state": "completed", 00:21:58.143 "digest": "sha256", 00:21:58.143 "dhgroup": "ffdhe4096" 00:21:58.143 } 00:21:58.143 } 00:21:58.143 ]' 00:21:58.143 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.143 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:58.143 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.143 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.143 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.143 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.143 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.143 10:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.401 10:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:21:58.401 10:54:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:21:58.968 10:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.968 10:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:58.968 10:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.968 10:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.227 10:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.227 10:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.227 10:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:59.227 10:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.227 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.484 00:21:59.484 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.484 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.484 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.742 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.742 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.742 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.742 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.742 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.742 { 00:21:59.742 "cntlid": 27, 00:21:59.742 "qid": 0, 00:21:59.742 "state": "enabled", 00:21:59.742 "thread": "nvmf_tgt_poll_group_000", 00:21:59.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:59.742 "listen_address": { 00:21:59.742 "trtype": "RDMA", 00:21:59.742 "adrfam": "IPv4", 00:21:59.742 "traddr": "192.168.100.8", 00:21:59.742 "trsvcid": "4420" 00:21:59.742 }, 00:21:59.742 "peer_address": { 00:21:59.742 "trtype": "RDMA", 00:21:59.742 "adrfam": "IPv4", 00:21:59.742 "traddr": "192.168.100.8", 00:21:59.742 "trsvcid": "33851" 00:21:59.742 }, 00:21:59.742 "auth": { 00:21:59.742 "state": "completed", 00:21:59.742 "digest": "sha256", 00:21:59.742 "dhgroup": "ffdhe4096" 00:21:59.742 } 00:21:59.742 } 00:21:59.742 ]' 00:21:59.742 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.742 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.742 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.000 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:00.000 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.000 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.000 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.000 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.259 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:00.259 10:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:00.829 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.829 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:00.829 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.829 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.829 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.829 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.829 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:00.829 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.088 10:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.347 00:22:01.347 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.347 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.347 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.606 { 00:22:01.606 "cntlid": 29, 00:22:01.606 "qid": 0, 00:22:01.606 "state": "enabled", 00:22:01.606 "thread": "nvmf_tgt_poll_group_000", 00:22:01.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:01.606 "listen_address": { 00:22:01.606 "trtype": "RDMA", 00:22:01.606 "adrfam": "IPv4", 00:22:01.606 "traddr": "192.168.100.8", 00:22:01.606 "trsvcid": "4420" 00:22:01.606 }, 00:22:01.606 "peer_address": { 00:22:01.606 "trtype": "RDMA", 00:22:01.606 "adrfam": "IPv4", 00:22:01.606 "traddr": "192.168.100.8", 00:22:01.606 "trsvcid": "46138" 00:22:01.606 }, 00:22:01.606 "auth": { 00:22:01.606 "state": "completed", 00:22:01.606 "digest": "sha256", 00:22:01.606 "dhgroup": "ffdhe4096" 00:22:01.606 } 00:22:01.606 } 00:22:01.606 ]' 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.606 10:54:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.606 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.865 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:01.865 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:02.432 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.692 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:02.692 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.692 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.692 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.692 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.692 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:02.692 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.951 10:54:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.951 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.210 00:22:03.210 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.210 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.210 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.210 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.210 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.210 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.469 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.469 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.469 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.469 { 00:22:03.469 "cntlid": 31, 00:22:03.469 "qid": 0, 00:22:03.469 "state": "enabled", 00:22:03.469 "thread": "nvmf_tgt_poll_group_000", 00:22:03.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:03.469 "listen_address": { 00:22:03.469 "trtype": "RDMA", 00:22:03.469 "adrfam": "IPv4", 00:22:03.469 "traddr": "192.168.100.8", 00:22:03.469 "trsvcid": "4420" 00:22:03.469 }, 00:22:03.469 "peer_address": { 00:22:03.469 "trtype": "RDMA", 00:22:03.469 "adrfam": "IPv4", 00:22:03.469 "traddr": "192.168.100.8", 00:22:03.469 "trsvcid": "39758" 00:22:03.469 }, 00:22:03.469 "auth": { 00:22:03.469 "state": "completed", 00:22:03.469 "digest": "sha256", 00:22:03.469 "dhgroup": "ffdhe4096" 00:22:03.469 } 00:22:03.469 } 00:22:03.469 ]' 00:22:03.469 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.469 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:03.469 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.469 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.469 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:22:03.469 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.469 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.469 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.728 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:03.728 10:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:04.296 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.296 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:04.296 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.296 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.296 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.296 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.296 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.296 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:04.296 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.556 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.125 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.125 { 00:22:05.125 "cntlid": 33, 00:22:05.125 "qid": 0, 00:22:05.125 "state": "enabled", 00:22:05.125 "thread": "nvmf_tgt_poll_group_000", 00:22:05.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:05.125 "listen_address": { 00:22:05.125 "trtype": "RDMA", 00:22:05.125 "adrfam": "IPv4", 00:22:05.125 "traddr": "192.168.100.8", 00:22:05.125 "trsvcid": "4420" 00:22:05.125 }, 00:22:05.125 "peer_address": { 00:22:05.125 "trtype": "RDMA", 00:22:05.125 "adrfam": "IPv4", 00:22:05.125 "traddr": "192.168.100.8", 00:22:05.125 "trsvcid": "57450" 00:22:05.125 }, 00:22:05.125 "auth": { 00:22:05.125 "state": "completed", 00:22:05.125 "digest": "sha256", 00:22:05.125 "dhgroup": "ffdhe6144" 00:22:05.125 } 00:22:05.125 } 00:22:05.125 ]' 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:22:05.125 10:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.125 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.385 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.385 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.385 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.385 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:05.385 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:06.326 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.326 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:06.326 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.326 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.326 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.326 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.326 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:06.326 10:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:06.326 10:54:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.326 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.897 00:22:06.897 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.897 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.897 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.897 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.897 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.897 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.897 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.897 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.897 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.897 { 00:22:06.897 "cntlid": 35, 00:22:06.897 "qid": 0, 00:22:06.897 "state": "enabled", 00:22:06.897 "thread": "nvmf_tgt_poll_group_000", 00:22:06.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:06.897 "listen_address": { 00:22:06.897 "trtype": "RDMA", 00:22:06.897 "adrfam": "IPv4", 00:22:06.897 "traddr": "192.168.100.8", 00:22:06.897 "trsvcid": "4420" 00:22:06.897 }, 00:22:06.897 "peer_address": { 00:22:06.897 "trtype": "RDMA", 00:22:06.897 "adrfam": "IPv4", 00:22:06.897 "traddr": "192.168.100.8", 00:22:06.897 "trsvcid": "59029" 00:22:06.897 }, 00:22:06.897 "auth": { 00:22:06.897 "state": "completed", 00:22:06.897 "digest": "sha256", 00:22:06.897 "dhgroup": "ffdhe6144" 00:22:06.897 } 00:22:06.897 } 
00:22:06.897 ]' 00:22:06.897 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.156 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:07.156 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.156 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.156 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.156 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.156 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.156 10:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.416 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:07.416 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:07.982 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.982 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:07.982 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.982 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.982 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.983 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.983 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:07.983 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.242 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.502 00:22:08.761 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.761 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.761 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.761 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.761 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.761 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.761 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.761 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.761 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.761 { 00:22:08.761 "cntlid": 37, 00:22:08.761 "qid": 0, 00:22:08.761 "state": "enabled", 00:22:08.761 "thread": "nvmf_tgt_poll_group_000", 00:22:08.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:08.761 "listen_address": { 00:22:08.761 "trtype": "RDMA", 00:22:08.761 "adrfam": "IPv4", 00:22:08.761 "traddr": "192.168.100.8", 00:22:08.761 "trsvcid": "4420" 00:22:08.761 }, 00:22:08.761 "peer_address": { 00:22:08.761 "trtype": "RDMA", 00:22:08.761 "adrfam": 
"IPv4", 00:22:08.761 "traddr": "192.168.100.8", 00:22:08.761 "trsvcid": "45849" 00:22:08.761 }, 00:22:08.761 "auth": { 00:22:08.761 "state": "completed", 00:22:08.761 "digest": "sha256", 00:22:08.761 "dhgroup": "ffdhe6144" 00:22:08.761 } 00:22:08.761 } 00:22:08.761 ]' 00:22:08.761 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.021 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:09.021 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.021 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:09.021 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.021 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.021 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.021 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.279 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:09.279 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:09.847 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.847 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:09.847 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.847 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.847 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.847 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.847 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:09.847 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.107 10:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.366 00:22:10.366 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.366 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.366 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.625 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.625 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.625 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.625 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.625 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.625 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.625 { 00:22:10.625 "cntlid": 39, 00:22:10.625 "qid": 0, 00:22:10.625 "state": "enabled", 00:22:10.625 "thread": "nvmf_tgt_poll_group_000", 00:22:10.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:10.625 "listen_address": { 00:22:10.625 "trtype": "RDMA", 00:22:10.625 "adrfam": "IPv4", 00:22:10.625 
"traddr": "192.168.100.8", 00:22:10.625 "trsvcid": "4420" 00:22:10.625 }, 00:22:10.625 "peer_address": { 00:22:10.625 "trtype": "RDMA", 00:22:10.625 "adrfam": "IPv4", 00:22:10.625 "traddr": "192.168.100.8", 00:22:10.625 "trsvcid": "42836" 00:22:10.625 }, 00:22:10.625 "auth": { 00:22:10.625 "state": "completed", 00:22:10.625 "digest": "sha256", 00:22:10.625 "dhgroup": "ffdhe6144" 00:22:10.625 } 00:22:10.625 } 00:22:10.625 ]' 00:22:10.625 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.625 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:10.625 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.885 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.885 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.885 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.885 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.885 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.146 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:11.146 10:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:11.714 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.714 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:11.714 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.714 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.714 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.714 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.714 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.714 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:11.714 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.972 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.541 00:22:12.541 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.541 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.541 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.541 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.541 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.541 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.541 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.541 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.541 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.541 { 00:22:12.541 "cntlid": 41, 00:22:12.541 "qid": 0, 00:22:12.541 "state": "enabled", 
00:22:12.541 "thread": "nvmf_tgt_poll_group_000", 00:22:12.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:12.541 "listen_address": { 00:22:12.541 "trtype": "RDMA", 00:22:12.541 "adrfam": "IPv4", 00:22:12.541 "traddr": "192.168.100.8", 00:22:12.541 "trsvcid": "4420" 00:22:12.541 }, 00:22:12.541 "peer_address": { 00:22:12.541 "trtype": "RDMA", 00:22:12.541 "adrfam": "IPv4", 00:22:12.541 "traddr": "192.168.100.8", 00:22:12.541 "trsvcid": "45246" 00:22:12.541 }, 00:22:12.541 "auth": { 00:22:12.541 "state": "completed", 00:22:12.541 "digest": "sha256", 00:22:12.541 "dhgroup": "ffdhe8192" 00:22:12.541 } 00:22:12.541 } 00:22:12.541 ]' 00:22:12.541 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.800 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:12.800 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.800 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.800 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.800 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.800 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.800 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.059 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:13.059 10:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:13.628 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.628 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:13.628 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.628 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.628 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.628 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.628 10:54:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:13.628 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.887 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.454 00:22:14.454 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.454 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.454 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.713 { 00:22:14.713 "cntlid": 43, 00:22:14.713 "qid": 0, 00:22:14.713 "state": "enabled", 00:22:14.713 "thread": "nvmf_tgt_poll_group_000", 00:22:14.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:14.713 "listen_address": { 00:22:14.713 "trtype": "RDMA", 00:22:14.713 "adrfam": "IPv4", 00:22:14.713 "traddr": "192.168.100.8", 00:22:14.713 "trsvcid": "4420" 00:22:14.713 }, 00:22:14.713 "peer_address": { 00:22:14.713 "trtype": "RDMA", 00:22:14.713 "adrfam": "IPv4", 00:22:14.713 "traddr": "192.168.100.8", 00:22:14.713 "trsvcid": "59229" 00:22:14.713 }, 00:22:14.713 "auth": { 00:22:14.713 "state": "completed", 00:22:14.713 "digest": "sha256", 00:22:14.713 "dhgroup": "ffdhe8192" 00:22:14.713 } 00:22:14.713 } 00:22:14.713 ]' 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.713 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.971 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:14.971 10:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:15.539 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.798 10:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.366 00:22:16.366 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.366 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.366 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.625 { 00:22:16.625 "cntlid": 45, 00:22:16.625 "qid": 0, 00:22:16.625 "state": "enabled", 00:22:16.625 "thread": "nvmf_tgt_poll_group_000", 00:22:16.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:16.625 "listen_address": { 00:22:16.625 "trtype": "RDMA", 00:22:16.625 "adrfam": "IPv4", 00:22:16.625 "traddr": "192.168.100.8", 00:22:16.625 "trsvcid": "4420" 00:22:16.625 }, 00:22:16.625 "peer_address": { 00:22:16.625 "trtype": "RDMA", 00:22:16.625 "adrfam": "IPv4", 00:22:16.625 "traddr": "192.168.100.8", 00:22:16.625 "trsvcid": "53469" 00:22:16.625 }, 00:22:16.625 "auth": { 00:22:16.625 "state": "completed", 00:22:16.625 "digest": "sha256", 00:22:16.625 "dhgroup": "ffdhe8192" 00:22:16.625 } 00:22:16.625 } 00:22:16.625 ]' 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.625 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.884 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:16.884 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:17.452 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.712 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.972 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.972 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:17.972 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.972 10:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.232 00:22:18.232 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.232 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.232 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.491 
10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.491 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.491 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.491 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.491 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.491 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.491 { 00:22:18.492 "cntlid": 47, 00:22:18.492 "qid": 0, 00:22:18.492 "state": "enabled", 00:22:18.492 "thread": "nvmf_tgt_poll_group_000", 00:22:18.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:18.492 "listen_address": { 00:22:18.492 "trtype": "RDMA", 00:22:18.492 "adrfam": "IPv4", 00:22:18.492 "traddr": "192.168.100.8", 00:22:18.492 "trsvcid": "4420" 00:22:18.492 }, 00:22:18.492 "peer_address": { 00:22:18.492 "trtype": "RDMA", 00:22:18.492 "adrfam": "IPv4", 00:22:18.492 "traddr": "192.168.100.8", 00:22:18.492 "trsvcid": "36937" 00:22:18.492 }, 00:22:18.492 "auth": { 00:22:18.492 "state": "completed", 00:22:18.492 "digest": "sha256", 00:22:18.492 "dhgroup": "ffdhe8192" 00:22:18.492 } 00:22:18.492 } 00:22:18.492 ]' 00:22:18.492 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.492 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:18.492 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.492 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.492 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.751 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.751 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.751 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.751 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:18.751 10:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.688 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.947 00:22:19.947 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:22:19.947 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.947 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.205 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.205 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.205 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.205 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.205 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.205 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.205 { 00:22:20.205 "cntlid": 49, 00:22:20.205 "qid": 0, 00:22:20.205 "state": "enabled", 00:22:20.205 "thread": "nvmf_tgt_poll_group_000", 00:22:20.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:20.205 "listen_address": { 00:22:20.205 "trtype": "RDMA", 00:22:20.205 "adrfam": "IPv4", 00:22:20.205 "traddr": "192.168.100.8", 00:22:20.205 "trsvcid": "4420" 00:22:20.205 }, 00:22:20.205 "peer_address": { 00:22:20.205 "trtype": "RDMA", 00:22:20.205 "adrfam": "IPv4", 00:22:20.205 "traddr": "192.168.100.8", 00:22:20.205 "trsvcid": "37700" 00:22:20.205 }, 00:22:20.205 "auth": { 00:22:20.205 "state": "completed", 00:22:20.205 "digest": "sha384", 00:22:20.205 "dhgroup": "null" 00:22:20.205 } 00:22:20.205 } 00:22:20.205 ]' 00:22:20.205 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.205 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:20.205 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.464 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:20.464 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.464 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.465 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.465 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.724 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:20.724 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:21.292 10:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.292 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:21.292 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.292 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.292 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.292 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.292 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:21.292 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.551 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.811 00:22:21.811 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.811 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.811 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.071 { 00:22:22.071 "cntlid": 51, 00:22:22.071 "qid": 0, 00:22:22.071 "state": "enabled", 00:22:22.071 "thread": "nvmf_tgt_poll_group_000", 00:22:22.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:22.071 "listen_address": { 00:22:22.071 "trtype": "RDMA", 00:22:22.071 "adrfam": "IPv4", 00:22:22.071 "traddr": "192.168.100.8", 00:22:22.071 "trsvcid": "4420" 00:22:22.071 }, 00:22:22.071 "peer_address": { 00:22:22.071 "trtype": "RDMA", 00:22:22.071 "adrfam": "IPv4", 00:22:22.071 "traddr": "192.168.100.8", 00:22:22.071 "trsvcid": "37098" 00:22:22.071 }, 00:22:22.071 "auth": { 00:22:22.071 "state": "completed", 00:22:22.071 "digest": "sha384", 00:22:22.071 "dhgroup": "null" 00:22:22.071 } 00:22:22.071 } 00:22:22.071 ]' 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.071 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.331 10:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:22.331 10:55:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:22.899 10:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.158 10:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:23.158 10:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.158 10:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.158 10:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.158 10:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.158 10:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:23.158 10:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:22:23.417 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.677 00:22:23.677 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.677 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.677 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.677 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.677 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.677 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.677 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.677 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.677 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.677 { 00:22:23.677 "cntlid": 53, 00:22:23.677 "qid": 0, 00:22:23.677 "state": "enabled", 00:22:23.677 "thread": "nvmf_tgt_poll_group_000", 00:22:23.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:23.677 "listen_address": { 00:22:23.677 "trtype": "RDMA", 00:22:23.677 "adrfam": "IPv4", 00:22:23.677 "traddr": "192.168.100.8", 00:22:23.677 "trsvcid": "4420" 00:22:23.677 }, 00:22:23.677 "peer_address": { 00:22:23.677 "trtype": "RDMA", 00:22:23.677 "adrfam": "IPv4", 00:22:23.677 "traddr": "192.168.100.8", 00:22:23.677 "trsvcid": "54010" 00:22:23.677 }, 00:22:23.677 "auth": { 00:22:23.677 "state": "completed", 00:22:23.677 "digest": "sha384", 00:22:23.677 "dhgroup": "null" 00:22:23.677 } 00:22:23.677 } 00:22:23.677 ]' 00:22:23.677 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.936 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.936 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.936 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:23.936 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.936 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.936 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.936 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.195 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:24.195 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:24.762 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.762 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:24.762 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.762 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.762 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.762 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.762 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:24.762 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.021 10:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.280 00:22:25.280 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.280 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.280 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.542 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.542 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.542 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.542 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.542 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.542 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.542 { 00:22:25.542 "cntlid": 55, 00:22:25.542 "qid": 0, 00:22:25.542 "state": "enabled", 00:22:25.542 "thread": "nvmf_tgt_poll_group_000", 00:22:25.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:25.542 "listen_address": { 00:22:25.542 "trtype": "RDMA", 00:22:25.542 "adrfam": "IPv4", 00:22:25.542 "traddr": "192.168.100.8", 00:22:25.542 "trsvcid": "4420" 00:22:25.542 }, 00:22:25.543 "peer_address": { 00:22:25.543 "trtype": "RDMA", 00:22:25.543 "adrfam": "IPv4", 00:22:25.543 "traddr": "192.168.100.8", 00:22:25.543 "trsvcid": "42648" 00:22:25.543 }, 00:22:25.543 "auth": { 00:22:25.543 "state": "completed", 00:22:25.543 "digest": "sha384", 00:22:25.543 "dhgroup": "null" 00:22:25.543 } 00:22:25.543 } 00:22:25.543 ]' 00:22:25.543 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.543 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:25.543 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.543 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:25.543 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.543 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.543 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.543 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
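The trace above is one full pass of the connect_authenticate helper from target/auth.sh: bdev_nvme_set_options pins the host to a single DH-HMAC-CHAP digest/dhgroup pair, nvmf_subsystem_add_host registers the host NQN on the subsystem with the key pair under test, bdev_nvme_attach_controller authenticates a new controller through the host RPC socket, the qpair list is read back for verification, and the controller is detached before the next key is tried. A minimal sketch of that cycle, assuming the rpc.py path, socket, addresses and key names used by this test bed (all taken from the trace, not defaults):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
subnqn=nqn.2024-03.io.spdk:cnode0

# restrict the host bdev driver to one digest/dhgroup combination
$rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
# allow the host on the target side with the DH-HMAC-CHAP key pair under test
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
# attach a controller through the host RPC socket, authenticating with the same keys
$rpc -s $hostsock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# inspect the negotiated auth parameters, then tear the controller down again
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'
$rpc -s $hostsock bdev_nvme_detach_controller nvme0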
00:22:25.898 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:25.898 10:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:26.525 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.525 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:26.525 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.525 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.525 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.525 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.525 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.525 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:26.525 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.796 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.055 00:22:27.055 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.055 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.055 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.315 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.315 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.315 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.315 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.315 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.315 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.315 { 00:22:27.315 "cntlid": 57, 00:22:27.315 "qid": 0, 00:22:27.315 "state": "enabled", 00:22:27.315 "thread": "nvmf_tgt_poll_group_000", 00:22:27.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:27.315 "listen_address": { 00:22:27.315 "trtype": "RDMA", 00:22:27.315 "adrfam": "IPv4", 00:22:27.315 "traddr": "192.168.100.8", 00:22:27.315 "trsvcid": "4420" 00:22:27.315 }, 00:22:27.315 "peer_address": { 00:22:27.315 "trtype": "RDMA", 00:22:27.315 "adrfam": "IPv4", 00:22:27.315 "traddr": "192.168.100.8", 00:22:27.315 "trsvcid": "59347" 00:22:27.315 }, 00:22:27.315 "auth": { 00:22:27.315 "state": "completed", 00:22:27.315 "digest": "sha384", 00:22:27.315 "dhgroup": "ffdhe2048" 00:22:27.315 } 00:22:27.315 } 00:22:27.315 ]' 00:22:27.315 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.315 10:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:27.315 10:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.315 10:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:27.315 10:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.315 10:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.315 10:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:22:27.315 10:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.573 10:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:27.573 10:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:28.141 10:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.400 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:28.400 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.400 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.400 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.400 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.400 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:28.400 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.659 
10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.659 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.918 00:22:28.918 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.918 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.918 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.918 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.918 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.918 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.918 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.918 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.918 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.918 { 00:22:28.918 "cntlid": 59, 00:22:28.918 "qid": 0, 00:22:28.918 "state": "enabled", 00:22:28.918 "thread": "nvmf_tgt_poll_group_000", 00:22:28.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:28.918 "listen_address": { 00:22:28.918 "trtype": "RDMA", 00:22:28.918 "adrfam": "IPv4", 00:22:28.918 "traddr": "192.168.100.8", 00:22:28.918 "trsvcid": "4420" 00:22:28.918 }, 00:22:28.918 "peer_address": { 00:22:28.918 "trtype": "RDMA", 00:22:28.918 "adrfam": "IPv4", 00:22:28.918 "traddr": "192.168.100.8", 00:22:28.918 "trsvcid": "38772" 00:22:28.918 }, 00:22:28.918 "auth": { 00:22:28.918 "state": "completed", 00:22:28.918 "digest": "sha384", 00:22:28.918 "dhgroup": "ffdhe2048" 00:22:28.918 } 00:22:28.918 } 00:22:28.918 ]' 00:22:28.918 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.176 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:29.176 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.177 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
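Each pass then asserts on the auth block that nvmf_subsystem_get_qpairs reports for the new qpair, and finally exercises the same key pair from the kernel initiator with nvme-cli. A condensed sketch of those checks, reusing the $rpc, $subnqn and $hostnqn shorthand from the sketch above; the DHHC-1 secrets are placeholders here, the full blobs appear verbatim in the trace:

qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
# digest/dhgroup must match what bdev_nvme_set_options allowed, and auth must have completed
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# same key pair from the kernel initiator (secrets elided)
nvme connect -t rdma -a 192.168.100.8 -n $subnqn -i 1 -q $hostnqn \
  --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
  --dhchap-secret 'DHHC-1:01:<host key>:' --dhchap-ctrl-secret 'DHHC-1:02:<controller key>:'
nvme disconnect -n $subnqn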
00:22:29.177 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.177 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.177 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.177 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.435 10:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:29.435 10:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:30.002 10:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.003 10:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:30.003 10:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.003 10:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.003 10:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.003 10:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.003 10:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:30.003 10:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.262 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.520 00:22:30.520 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.520 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.520 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.778 { 00:22:30.778 "cntlid": 61, 00:22:30.778 "qid": 0, 00:22:30.778 "state": "enabled", 00:22:30.778 "thread": "nvmf_tgt_poll_group_000", 00:22:30.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:30.778 "listen_address": { 00:22:30.778 "trtype": "RDMA", 00:22:30.778 "adrfam": "IPv4", 00:22:30.778 "traddr": "192.168.100.8", 00:22:30.778 "trsvcid": "4420" 00:22:30.778 }, 00:22:30.778 "peer_address": { 00:22:30.778 "trtype": "RDMA", 00:22:30.778 "adrfam": "IPv4", 00:22:30.778 "traddr": "192.168.100.8", 00:22:30.778 "trsvcid": "34925" 00:22:30.778 }, 00:22:30.778 "auth": { 00:22:30.778 "state": "completed", 00:22:30.778 "digest": "sha384", 00:22:30.778 "dhgroup": "ffdhe2048" 00:22:30.778 } 00:22:30.778 } 00:22:30.778 ]' 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.778 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.037 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:31.037 10:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:31.606 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.864 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:31.864 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.864 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.864 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.864 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.864 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:31.864 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:32.122 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:32.122 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.122 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:32.123 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:32.123 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:32.123 10:55:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.123 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:32.123 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.123 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.123 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.123 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:32.123 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:32.123 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:32.382 00:22:32.382 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.382 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.382 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.382 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.382 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.382 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.382 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.382 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.382 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.382 { 00:22:32.382 "cntlid": 63, 00:22:32.382 "qid": 0, 00:22:32.382 "state": "enabled", 00:22:32.382 "thread": "nvmf_tgt_poll_group_000", 00:22:32.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:32.382 "listen_address": { 00:22:32.382 "trtype": "RDMA", 00:22:32.382 "adrfam": "IPv4", 00:22:32.382 "traddr": "192.168.100.8", 00:22:32.382 "trsvcid": "4420" 00:22:32.382 }, 00:22:32.382 "peer_address": { 00:22:32.382 "trtype": "RDMA", 00:22:32.382 "adrfam": "IPv4", 00:22:32.382 "traddr": "192.168.100.8", 00:22:32.382 "trsvcid": "33153" 00:22:32.382 }, 00:22:32.382 "auth": { 00:22:32.382 "state": "completed", 00:22:32.382 "digest": "sha384", 00:22:32.382 "dhgroup": "ffdhe2048" 00:22:32.382 } 00:22:32.382 } 00:22:32.382 ]' 00:22:32.382 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.641 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:32.641 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.641 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:32.641 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.641 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.641 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.641 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.899 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:32.900 10:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:33.468 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.468 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:33.468 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.468 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.468 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.468 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.468 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.468 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.468 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.727 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.985 00:22:33.985 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.985 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.985 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.244 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.244 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.244 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.244 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.244 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.244 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.244 { 00:22:34.244 "cntlid": 65, 00:22:34.244 "qid": 0, 00:22:34.244 "state": "enabled", 00:22:34.244 "thread": "nvmf_tgt_poll_group_000", 00:22:34.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:34.244 "listen_address": { 00:22:34.244 "trtype": "RDMA", 00:22:34.244 "adrfam": "IPv4", 00:22:34.244 "traddr": "192.168.100.8", 00:22:34.244 "trsvcid": "4420" 00:22:34.244 }, 00:22:34.244 "peer_address": { 00:22:34.244 "trtype": "RDMA", 00:22:34.244 "adrfam": "IPv4", 00:22:34.244 "traddr": "192.168.100.8", 00:22:34.244 "trsvcid": "33749" 
00:22:34.244 }, 00:22:34.244 "auth": { 00:22:34.244 "state": "completed", 00:22:34.244 "digest": "sha384", 00:22:34.244 "dhgroup": "ffdhe3072" 00:22:34.244 } 00:22:34.244 } 00:22:34.244 ]' 00:22:34.244 10:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.244 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:34.244 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.244 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:34.244 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.244 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.244 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.244 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.503 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:34.503 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:35.071 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.329 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:35.329 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.329 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.329 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.329 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.329 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:35.329 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.588 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.846 00:22:35.846 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.846 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.846 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.846 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.846 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.846 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.846 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.105 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.106 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.106 { 00:22:36.106 "cntlid": 67, 00:22:36.106 "qid": 0, 00:22:36.106 "state": "enabled", 00:22:36.106 "thread": "nvmf_tgt_poll_group_000", 00:22:36.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 
00:22:36.106 "listen_address": { 00:22:36.106 "trtype": "RDMA", 00:22:36.106 "adrfam": "IPv4", 00:22:36.106 "traddr": "192.168.100.8", 00:22:36.106 "trsvcid": "4420" 00:22:36.106 }, 00:22:36.106 "peer_address": { 00:22:36.106 "trtype": "RDMA", 00:22:36.106 "adrfam": "IPv4", 00:22:36.106 "traddr": "192.168.100.8", 00:22:36.106 "trsvcid": "45758" 00:22:36.106 }, 00:22:36.106 "auth": { 00:22:36.106 "state": "completed", 00:22:36.106 "digest": "sha384", 00:22:36.106 "dhgroup": "ffdhe3072" 00:22:36.106 } 00:22:36.106 } 00:22:36.106 ]' 00:22:36.106 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.106 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:36.106 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.106 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:36.106 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.106 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.106 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.106 10:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.363 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:36.363 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:36.931 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.931 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:36.931 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.931 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.931 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.931 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.931 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:36.931 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:37.191 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:37.191 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.191 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:37.191 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:37.191 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:37.191 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.191 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.191 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.191 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.191 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.191 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.191 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.191 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.451 00:22:37.451 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.451 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.451 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.710 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.710 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.710 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.710 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.710 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.710 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:22:37.710 { 00:22:37.710 "cntlid": 69, 00:22:37.710 "qid": 0, 00:22:37.710 "state": "enabled", 00:22:37.710 "thread": "nvmf_tgt_poll_group_000", 00:22:37.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:37.710 "listen_address": { 00:22:37.710 "trtype": "RDMA", 00:22:37.710 "adrfam": "IPv4", 00:22:37.710 "traddr": "192.168.100.8", 00:22:37.710 "trsvcid": "4420" 00:22:37.710 }, 00:22:37.710 "peer_address": { 00:22:37.710 "trtype": "RDMA", 00:22:37.710 "adrfam": "IPv4", 00:22:37.710 "traddr": "192.168.100.8", 00:22:37.710 "trsvcid": "50921" 00:22:37.710 }, 00:22:37.710 "auth": { 00:22:37.710 "state": "completed", 00:22:37.710 "digest": "sha384", 00:22:37.710 "dhgroup": "ffdhe3072" 00:22:37.710 } 00:22:37.710 } 00:22:37.710 ]' 00:22:37.710 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.710 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:37.710 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.710 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:37.710 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.969 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.969 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.969 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.969 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:37.969 10:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:38.905 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.905 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:38.905 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.905 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.905 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.905 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.905 10:55:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:38.906 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.164 00:22:39.164 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.164 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.164 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.423 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.424 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.424 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.424 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.424 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.424 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.424 { 00:22:39.424 "cntlid": 71, 00:22:39.424 "qid": 0, 00:22:39.424 "state": "enabled", 00:22:39.424 "thread": "nvmf_tgt_poll_group_000", 00:22:39.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:39.424 "listen_address": { 00:22:39.424 "trtype": "RDMA", 00:22:39.424 "adrfam": "IPv4", 00:22:39.424 "traddr": "192.168.100.8", 00:22:39.424 "trsvcid": "4420" 00:22:39.424 }, 00:22:39.424 "peer_address": { 00:22:39.424 "trtype": "RDMA", 00:22:39.424 "adrfam": "IPv4", 00:22:39.424 "traddr": "192.168.100.8", 00:22:39.424 "trsvcid": "51398" 00:22:39.424 }, 00:22:39.424 "auth": { 00:22:39.424 "state": "completed", 00:22:39.424 "digest": "sha384", 00:22:39.424 "dhgroup": "ffdhe3072" 00:22:39.424 } 00:22:39.424 } 00:22:39.424 ]' 00:22:39.424 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.424 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:39.424 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.683 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:39.683 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.683 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.683 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.683 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.942 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:39.942 10:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:40.511 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.511 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:40.511 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.511 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.511 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.511 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:22:40.511 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.511 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:40.511 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.771 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.030 00:22:41.030 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.030 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.030 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.288 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.288 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.288 10:55:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.288 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.288 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.288 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.288 { 00:22:41.288 "cntlid": 73, 00:22:41.288 "qid": 0, 00:22:41.288 "state": "enabled", 00:22:41.288 "thread": "nvmf_tgt_poll_group_000", 00:22:41.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:41.288 "listen_address": { 00:22:41.288 "trtype": "RDMA", 00:22:41.288 "adrfam": "IPv4", 00:22:41.288 "traddr": "192.168.100.8", 00:22:41.288 "trsvcid": "4420" 00:22:41.288 }, 00:22:41.288 "peer_address": { 00:22:41.288 "trtype": "RDMA", 00:22:41.288 "adrfam": "IPv4", 00:22:41.288 "traddr": "192.168.100.8", 00:22:41.288 "trsvcid": "47552" 00:22:41.288 }, 00:22:41.288 "auth": { 00:22:41.288 "state": "completed", 00:22:41.288 "digest": "sha384", 00:22:41.288 "dhgroup": "ffdhe4096" 00:22:41.288 } 00:22:41.288 } 00:22:41.288 ]' 00:22:41.288 10:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.288 10:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:41.289 10:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.289 10:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:41.289 10:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.289 10:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.289 10:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.289 10:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.548 10:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:41.548 10:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:42.116 10:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.374 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:42.374 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.374 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.375 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.634 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.894 { 00:22:42.894 "cntlid": 75, 00:22:42.894 "qid": 0, 00:22:42.894 "state": "enabled", 00:22:42.894 "thread": "nvmf_tgt_poll_group_000", 00:22:42.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:42.894 "listen_address": { 00:22:42.894 "trtype": "RDMA", 00:22:42.894 "adrfam": "IPv4", 00:22:42.894 "traddr": "192.168.100.8", 00:22:42.894 "trsvcid": "4420" 00:22:42.894 }, 00:22:42.894 "peer_address": { 00:22:42.894 "trtype": "RDMA", 00:22:42.894 "adrfam": "IPv4", 00:22:42.894 "traddr": "192.168.100.8", 00:22:42.894 "trsvcid": "43650" 00:22:42.894 }, 00:22:42.894 "auth": { 00:22:42.894 "state": "completed", 00:22:42.894 "digest": "sha384", 00:22:42.894 "dhgroup": "ffdhe4096" 00:22:42.894 } 00:22:42.894 } 00:22:42.894 ]' 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:42.894 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.155 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:43.155 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.155 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.155 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.155 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.415 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:43.415 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:43.985 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.985 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:43.985 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.985 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.985 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.985 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.985 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:43.985 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.244 10:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.504 00:22:44.504 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # hostrpc bdev_nvme_get_controllers 00:22:44.504 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.504 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.763 { 00:22:44.763 "cntlid": 77, 00:22:44.763 "qid": 0, 00:22:44.763 "state": "enabled", 00:22:44.763 "thread": "nvmf_tgt_poll_group_000", 00:22:44.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:44.763 "listen_address": { 00:22:44.763 "trtype": "RDMA", 00:22:44.763 "adrfam": "IPv4", 00:22:44.763 "traddr": "192.168.100.8", 00:22:44.763 "trsvcid": "4420" 00:22:44.763 }, 00:22:44.763 "peer_address": { 00:22:44.763 "trtype": "RDMA", 00:22:44.763 "adrfam": "IPv4", 00:22:44.763 "traddr": "192.168.100.8", 00:22:44.763 "trsvcid": "49540" 00:22:44.763 }, 00:22:44.763 "auth": { 00:22:44.763 "state": "completed", 00:22:44.763 "digest": "sha384", 00:22:44.763 "dhgroup": "ffdhe4096" 00:22:44.763 } 00:22:44.763 } 00:22:44.763 ]' 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.763 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.022 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:45.022 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:45.591 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.850 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:45.850 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.850 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.850 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.850 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.850 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:45.850 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.111 10:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.379 00:22:46.379 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.379 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.379 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.379 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.379 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.379 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.379 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.379 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.379 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.380 { 00:22:46.380 "cntlid": 79, 00:22:46.380 "qid": 0, 00:22:46.380 "state": "enabled", 00:22:46.380 "thread": "nvmf_tgt_poll_group_000", 00:22:46.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:46.380 "listen_address": { 00:22:46.380 "trtype": "RDMA", 00:22:46.380 "adrfam": "IPv4", 00:22:46.380 "traddr": "192.168.100.8", 00:22:46.380 "trsvcid": "4420" 00:22:46.380 }, 00:22:46.380 "peer_address": { 00:22:46.380 "trtype": "RDMA", 00:22:46.380 "adrfam": "IPv4", 00:22:46.380 "traddr": "192.168.100.8", 00:22:46.380 "trsvcid": "38232" 00:22:46.380 }, 00:22:46.380 "auth": { 00:22:46.380 "state": "completed", 00:22:46.380 "digest": "sha384", 00:22:46.380 "dhgroup": "ffdhe4096" 00:22:46.380 } 00:22:46.380 } 00:22:46.380 ]' 00:22:46.380 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.639 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:46.639 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.639 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:46.639 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.639 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.639 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.639 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.898 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:46.898 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:47.466 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.466 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:47.466 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.466 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.466 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.466 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.466 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.466 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:47.466 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.725 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.725 10:55:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.292 00:22:48.292 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.292 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.292 10:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.292 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.292 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.292 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.292 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.292 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.292 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.292 { 00:22:48.292 "cntlid": 81, 00:22:48.292 "qid": 0, 00:22:48.292 "state": "enabled", 00:22:48.292 "thread": "nvmf_tgt_poll_group_000", 00:22:48.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:48.292 "listen_address": { 00:22:48.292 "trtype": "RDMA", 00:22:48.292 "adrfam": "IPv4", 00:22:48.292 "traddr": "192.168.100.8", 00:22:48.292 "trsvcid": "4420" 00:22:48.292 }, 00:22:48.292 "peer_address": { 00:22:48.292 "trtype": "RDMA", 00:22:48.292 "adrfam": "IPv4", 00:22:48.292 "traddr": "192.168.100.8", 00:22:48.292 "trsvcid": "33255" 00:22:48.292 }, 00:22:48.292 "auth": { 00:22:48.292 "state": "completed", 00:22:48.292 "digest": "sha384", 00:22:48.292 "dhgroup": "ffdhe6144" 00:22:48.292 } 00:22:48.292 } 00:22:48.292 ]' 00:22:48.292 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.292 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:48.292 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.551 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:48.551 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.551 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.551 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.551 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.810 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:48.810 10:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:49.379 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.379 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:49.379 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.379 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.379 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.379 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.379 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:49.379 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
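The trace above is one pass of the test's connect_authenticate helper, here for sha384 with ffdhe6144 and key1. Condensed into the underlying commands, it amounts to the short shell sequence below. This is a sketch reconstructed from the trace itself: the rpc.py path, host socket, address and NQNs are the ones this particular run uses, and key1/ckey1 are key names the test set up earlier.

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

# host side: restrict the initiator to the digest/dhgroup under test
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
# target side: allow this host on the subsystem with the key pair under test
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
# authenticate by attaching a controller with the matching keys
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1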
00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.638 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.897 00:22:49.897 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.897 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.897 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:50.157 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.157 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.157 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.157 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.157 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.157 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.157 { 00:22:50.157 "cntlid": 83, 00:22:50.157 "qid": 0, 00:22:50.157 "state": "enabled", 00:22:50.157 "thread": "nvmf_tgt_poll_group_000", 00:22:50.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:50.157 "listen_address": { 00:22:50.157 "trtype": "RDMA", 00:22:50.157 "adrfam": "IPv4", 00:22:50.157 "traddr": "192.168.100.8", 00:22:50.157 "trsvcid": "4420" 00:22:50.157 }, 00:22:50.157 "peer_address": { 00:22:50.157 "trtype": "RDMA", 00:22:50.157 "adrfam": "IPv4", 00:22:50.157 "traddr": "192.168.100.8", 00:22:50.157 "trsvcid": "44826" 00:22:50.157 }, 00:22:50.157 "auth": { 00:22:50.157 "state": "completed", 00:22:50.157 "digest": "sha384", 00:22:50.157 "dhgroup": "ffdhe6144" 00:22:50.157 } 00:22:50.157 } 00:22:50.157 ]' 00:22:50.157 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.157 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:50.157 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.157 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:50.157 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.416 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.416 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
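After the authenticated attach, the trace verifies the result with jq and tears the connection down before moving on to the next key. The equivalent condensed steps, under the same assumptions as the sketch above, are the following; KEY and CKEY are placeholders for the raw DHHC-1 secret blobs that appear in the trace.

# verification: controller exists and the qpair negotiated what was requested
$RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'       # expect nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'    # expect sha384
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'   # expect ffdhe6144
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'     # expect completed
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0

# the nvme-cli path exercised at auth.sh@36/@82/@83 passes the raw secrets instead
# of named keys, then disconnects and drops the host entry again
nvme connect -t rdma -a 192.168.100.8 -n $SUBNQN -i 1 -q $HOSTNQN \
  --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
  --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN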
00:22:50.416 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.416 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:50.416 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:50.984 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.243 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:51.243 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.243 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.243 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.243 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.243 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:51.243 10:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:51.502 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:51.502 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.502 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:51.503 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:51.503 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:51.503 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.503 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.503 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.503 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.503 10:55:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.503 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.503 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.503 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.762 00:22:51.762 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.762 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.762 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.021 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.021 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.021 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.021 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.021 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.021 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.021 { 00:22:52.021 "cntlid": 85, 00:22:52.021 "qid": 0, 00:22:52.021 "state": "enabled", 00:22:52.021 "thread": "nvmf_tgt_poll_group_000", 00:22:52.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:52.021 "listen_address": { 00:22:52.021 "trtype": "RDMA", 00:22:52.021 "adrfam": "IPv4", 00:22:52.021 "traddr": "192.168.100.8", 00:22:52.021 "trsvcid": "4420" 00:22:52.021 }, 00:22:52.021 "peer_address": { 00:22:52.021 "trtype": "RDMA", 00:22:52.021 "adrfam": "IPv4", 00:22:52.021 "traddr": "192.168.100.8", 00:22:52.021 "trsvcid": "56027" 00:22:52.021 }, 00:22:52.021 "auth": { 00:22:52.022 "state": "completed", 00:22:52.022 "digest": "sha384", 00:22:52.022 "dhgroup": "ffdhe6144" 00:22:52.022 } 00:22:52.022 } 00:22:52.022 ]' 00:22:52.022 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.022 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:52.022 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.022 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:52.022 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.022 
10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.022 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.022 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.280 10:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:52.280 10:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:22:52.847 10:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.106 10:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:53.106 10:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.106 10:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.106 10:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.106 10:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.106 10:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:53.106 10:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:53.365 10:55:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.365 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.624 00:22:53.624 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.624 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.624 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.883 { 00:22:53.883 "cntlid": 87, 00:22:53.883 "qid": 0, 00:22:53.883 "state": "enabled", 00:22:53.883 "thread": "nvmf_tgt_poll_group_000", 00:22:53.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:53.883 "listen_address": { 00:22:53.883 "trtype": "RDMA", 00:22:53.883 "adrfam": "IPv4", 00:22:53.883 "traddr": "192.168.100.8", 00:22:53.883 "trsvcid": "4420" 00:22:53.883 }, 00:22:53.883 "peer_address": { 00:22:53.883 "trtype": "RDMA", 00:22:53.883 "adrfam": "IPv4", 00:22:53.883 "traddr": "192.168.100.8", 00:22:53.883 "trsvcid": "60131" 00:22:53.883 }, 00:22:53.883 "auth": { 00:22:53.883 "state": "completed", 00:22:53.883 "digest": "sha384", 00:22:53.883 "dhgroup": "ffdhe6144" 00:22:53.883 } 00:22:53.883 } 00:22:53.883 ]' 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.883 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.142 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:54.142 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:22:54.709 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.968 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:54.968 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.968 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.968 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.968 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.968 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.968 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:54.968 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.227 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.486 00:22:55.486 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.486 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.486 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.745 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.745 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.745 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.745 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.745 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.745 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.745 { 00:22:55.745 "cntlid": 89, 00:22:55.745 "qid": 0, 00:22:55.745 "state": "enabled", 00:22:55.745 "thread": "nvmf_tgt_poll_group_000", 00:22:55.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:55.745 "listen_address": { 00:22:55.745 "trtype": "RDMA", 00:22:55.745 "adrfam": "IPv4", 00:22:55.745 "traddr": "192.168.100.8", 00:22:55.745 "trsvcid": "4420" 00:22:55.745 }, 00:22:55.745 "peer_address": { 00:22:55.745 "trtype": "RDMA", 00:22:55.745 "adrfam": "IPv4", 00:22:55.745 "traddr": "192.168.100.8", 00:22:55.745 "trsvcid": "49806" 00:22:55.745 }, 00:22:55.745 "auth": { 00:22:55.745 "state": "completed", 00:22:55.745 "digest": "sha384", 00:22:55.745 "dhgroup": "ffdhe8192" 00:22:55.745 } 00:22:55.745 } 00:22:55.745 ]' 00:22:55.745 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.745 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:55.745 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.003 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:56.003 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.003 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.003 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.003 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.263 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:56.263 10:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:22:56.830 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.830 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:56.830 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.831 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.831 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.831 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.831 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:56.831 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.089 10:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.657 00:22:57.657 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.657 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.657 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.657 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.657 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.657 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.657 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.657 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.657 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.657 { 00:22:57.657 "cntlid": 91, 00:22:57.657 "qid": 0, 00:22:57.657 "state": "enabled", 00:22:57.657 "thread": "nvmf_tgt_poll_group_000", 00:22:57.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:57.657 "listen_address": { 00:22:57.657 "trtype": "RDMA", 00:22:57.657 "adrfam": "IPv4", 00:22:57.657 "traddr": "192.168.100.8", 00:22:57.657 "trsvcid": "4420" 00:22:57.657 }, 00:22:57.657 "peer_address": { 00:22:57.657 "trtype": "RDMA", 00:22:57.657 "adrfam": "IPv4", 00:22:57.657 "traddr": "192.168.100.8", 00:22:57.657 "trsvcid": "45546" 00:22:57.657 }, 00:22:57.657 "auth": { 
00:22:57.657 "state": "completed", 00:22:57.657 "digest": "sha384", 00:22:57.657 "dhgroup": "ffdhe8192" 00:22:57.657 } 00:22:57.657 } 00:22:57.657 ]' 00:22:57.657 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.917 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:57.917 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.917 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:57.917 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.917 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.917 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.917 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.177 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:58.177 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:22:58.747 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.747 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:58.747 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.747 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.747 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.747 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.747 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:58.747 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.007 10:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.576 00:22:59.576 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.576 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.576 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.837 { 00:22:59.837 "cntlid": 93, 00:22:59.837 "qid": 0, 00:22:59.837 "state": "enabled", 00:22:59.837 "thread": "nvmf_tgt_poll_group_000", 00:22:59.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:59.837 "listen_address": { 00:22:59.837 "trtype": "RDMA", 00:22:59.837 "adrfam": "IPv4", 00:22:59.837 "traddr": "192.168.100.8", 
00:22:59.837 "trsvcid": "4420" 00:22:59.837 }, 00:22:59.837 "peer_address": { 00:22:59.837 "trtype": "RDMA", 00:22:59.837 "adrfam": "IPv4", 00:22:59.837 "traddr": "192.168.100.8", 00:22:59.837 "trsvcid": "59178" 00:22:59.837 }, 00:22:59.837 "auth": { 00:22:59.837 "state": "completed", 00:22:59.837 "digest": "sha384", 00:22:59.837 "dhgroup": "ffdhe8192" 00:22:59.837 } 00:22:59.837 } 00:22:59.837 ]' 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.837 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.097 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:00.097 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:00.664 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.923 10:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:01.490 00:23:01.490 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.490 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.491 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.750 { 00:23:01.750 "cntlid": 95, 00:23:01.750 "qid": 0, 00:23:01.750 "state": "enabled", 00:23:01.750 "thread": "nvmf_tgt_poll_group_000", 00:23:01.750 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:01.750 "listen_address": { 00:23:01.750 "trtype": "RDMA", 00:23:01.750 "adrfam": "IPv4", 00:23:01.750 "traddr": "192.168.100.8", 00:23:01.750 "trsvcid": "4420" 00:23:01.750 }, 00:23:01.750 "peer_address": { 00:23:01.750 "trtype": "RDMA", 00:23:01.750 "adrfam": "IPv4", 00:23:01.750 "traddr": "192.168.100.8", 00:23:01.750 "trsvcid": "60413" 00:23:01.750 }, 00:23:01.750 "auth": { 00:23:01.750 "state": "completed", 00:23:01.750 "digest": "sha384", 00:23:01.750 "dhgroup": "ffdhe8192" 00:23:01.750 } 00:23:01.750 } 00:23:01.750 ]' 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.750 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.009 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:02.009 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:02.576 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.835 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:02.835 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.835 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.835 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.835 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:02.835 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.835 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.835 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:02.835 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:03.094 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:23:03.094 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.094 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:03.094 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:03.094 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:03.094 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.095 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.095 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.095 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.095 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.095 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.095 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.095 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.354 00:23:03.354 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:03.354 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.354 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.354 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.354 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.354 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.354 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.354 10:55:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.354 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.354 { 00:23:03.354 "cntlid": 97, 00:23:03.354 "qid": 0, 00:23:03.354 "state": "enabled", 00:23:03.354 "thread": "nvmf_tgt_poll_group_000", 00:23:03.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:03.354 "listen_address": { 00:23:03.354 "trtype": "RDMA", 00:23:03.354 "adrfam": "IPv4", 00:23:03.354 "traddr": "192.168.100.8", 00:23:03.354 "trsvcid": "4420" 00:23:03.354 }, 00:23:03.354 "peer_address": { 00:23:03.354 "trtype": "RDMA", 00:23:03.354 "adrfam": "IPv4", 00:23:03.354 "traddr": "192.168.100.8", 00:23:03.354 "trsvcid": "52986" 00:23:03.354 }, 00:23:03.354 "auth": { 00:23:03.354 "state": "completed", 00:23:03.354 "digest": "sha512", 00:23:03.354 "dhgroup": "null" 00:23:03.354 } 00:23:03.354 } 00:23:03.354 ]' 00:23:03.354 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.613 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:03.613 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.613 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:03.613 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.613 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.613 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.613 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.872 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:03.873 10:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:04.442 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.442 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:04.442 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.442 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:23:04.442 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.442 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.442 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:04.442 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.701 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.960 00:23:04.960 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.960 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.960 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.221 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.221 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.221 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.221 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.221 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.221 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.221 { 00:23:05.221 "cntlid": 99, 00:23:05.221 "qid": 0, 00:23:05.221 "state": "enabled", 00:23:05.221 "thread": "nvmf_tgt_poll_group_000", 00:23:05.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:05.221 "listen_address": { 00:23:05.221 "trtype": "RDMA", 00:23:05.221 "adrfam": "IPv4", 00:23:05.221 "traddr": "192.168.100.8", 00:23:05.221 "trsvcid": "4420" 00:23:05.221 }, 00:23:05.221 "peer_address": { 00:23:05.221 "trtype": "RDMA", 00:23:05.221 "adrfam": "IPv4", 00:23:05.221 "traddr": "192.168.100.8", 00:23:05.221 "trsvcid": "56743" 00:23:05.221 }, 00:23:05.221 "auth": { 00:23:05.221 "state": "completed", 00:23:05.221 "digest": "sha512", 00:23:05.221 "dhgroup": "null" 00:23:05.221 } 00:23:05.221 } 00:23:05.221 ]' 00:23:05.221 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.221 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.221 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.221 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:05.221 10:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.221 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.221 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.221 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.480 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:23:05.480 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:23:06.049 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.309 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:06.309 
10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.309 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.309 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.309 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:06.309 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:06.309 10:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.309 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.567 00:23:06.567 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:06.567 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.567 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:06.826 
10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.826 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.826 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.826 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.826 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.826 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.826 { 00:23:06.826 "cntlid": 101, 00:23:06.826 "qid": 0, 00:23:06.826 "state": "enabled", 00:23:06.826 "thread": "nvmf_tgt_poll_group_000", 00:23:06.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:06.826 "listen_address": { 00:23:06.826 "trtype": "RDMA", 00:23:06.826 "adrfam": "IPv4", 00:23:06.826 "traddr": "192.168.100.8", 00:23:06.826 "trsvcid": "4420" 00:23:06.826 }, 00:23:06.826 "peer_address": { 00:23:06.826 "trtype": "RDMA", 00:23:06.826 "adrfam": "IPv4", 00:23:06.826 "traddr": "192.168.100.8", 00:23:06.826 "trsvcid": "52618" 00:23:06.826 }, 00:23:06.826 "auth": { 00:23:06.826 "state": "completed", 00:23:06.826 "digest": "sha512", 00:23:06.826 "dhgroup": "null" 00:23:06.826 } 00:23:06.826 } 00:23:06.826 ]' 00:23:06.826 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.826 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.826 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:07.084 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:07.084 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:07.084 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.085 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.085 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.085 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:07.085 10:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.024 10:55:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:08.024 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:08.284 00:23:08.284 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:08.284 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:08.284 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.542 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.543 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.543 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.543 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.543 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.543 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:08.543 { 00:23:08.543 "cntlid": 103, 00:23:08.543 "qid": 0, 00:23:08.543 "state": "enabled", 00:23:08.543 "thread": "nvmf_tgt_poll_group_000", 00:23:08.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:08.543 "listen_address": { 00:23:08.543 "trtype": "RDMA", 00:23:08.543 "adrfam": "IPv4", 00:23:08.543 "traddr": "192.168.100.8", 00:23:08.543 "trsvcid": "4420" 00:23:08.543 }, 00:23:08.543 "peer_address": { 00:23:08.543 "trtype": "RDMA", 00:23:08.543 "adrfam": "IPv4", 00:23:08.543 "traddr": "192.168.100.8", 00:23:08.543 "trsvcid": "33802" 00:23:08.543 }, 00:23:08.543 "auth": { 00:23:08.543 "state": "completed", 00:23:08.543 "digest": "sha512", 00:23:08.543 "dhgroup": "null" 00:23:08.543 } 00:23:08.543 } 00:23:08.543 ]' 00:23:08.543 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:08.543 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:08.543 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:08.801 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:08.801 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.801 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.801 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.802 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.061 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:09.061 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:09.630 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.630 10:55:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:09.630 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.630 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.630 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.630 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.630 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:09.630 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.630 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.890 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.149 00:23:10.149 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:23:10.149 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.149 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.408 { 00:23:10.408 "cntlid": 105, 00:23:10.408 "qid": 0, 00:23:10.408 "state": "enabled", 00:23:10.408 "thread": "nvmf_tgt_poll_group_000", 00:23:10.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:10.408 "listen_address": { 00:23:10.408 "trtype": "RDMA", 00:23:10.408 "adrfam": "IPv4", 00:23:10.408 "traddr": "192.168.100.8", 00:23:10.408 "trsvcid": "4420" 00:23:10.408 }, 00:23:10.408 "peer_address": { 00:23:10.408 "trtype": "RDMA", 00:23:10.408 "adrfam": "IPv4", 00:23:10.408 "traddr": "192.168.100.8", 00:23:10.408 "trsvcid": "54879" 00:23:10.408 }, 00:23:10.408 "auth": { 00:23:10.408 "state": "completed", 00:23:10.408 "digest": "sha512", 00:23:10.408 "dhgroup": "ffdhe2048" 00:23:10.408 } 00:23:10.408 } 00:23:10.408 ]' 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.408 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.667 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:10.667 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:11.234 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.494 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.752 00:23:11.752 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.752 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.752 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.012 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.012 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.012 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.012 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.012 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.012 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:12.012 { 00:23:12.012 "cntlid": 107, 00:23:12.012 "qid": 0, 00:23:12.012 "state": "enabled", 00:23:12.012 "thread": "nvmf_tgt_poll_group_000", 00:23:12.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:12.012 "listen_address": { 00:23:12.012 "trtype": "RDMA", 00:23:12.012 "adrfam": "IPv4", 00:23:12.012 "traddr": "192.168.100.8", 00:23:12.012 "trsvcid": "4420" 00:23:12.012 }, 00:23:12.012 "peer_address": { 00:23:12.012 "trtype": "RDMA", 00:23:12.012 "adrfam": "IPv4", 00:23:12.012 "traddr": "192.168.100.8", 00:23:12.012 "trsvcid": "32874" 00:23:12.012 }, 00:23:12.012 "auth": { 00:23:12.012 "state": "completed", 00:23:12.012 "digest": "sha512", 00:23:12.012 "dhgroup": "ffdhe2048" 00:23:12.012 } 00:23:12.012 } 00:23:12.012 ]' 00:23:12.012 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:12.012 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.012 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:12.012 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:12.012 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:12.271 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.271 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.271 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.271 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 
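The entries above and below make up one pass of the sha512 loop: the host-side SPDK initiator is restricted to a single digest/dhgroup pair, the host NQN is added to the subsystem with the matching DH-HMAC-CHAP keys, a controller is attached and the negotiated auth parameters are read back, and the same keys are then exercised through nvme-cli before the host entry is removed. A condensed sketch of one such pass, with the address, NQNs and the key1/ckey1 names taken from this log (paths shortened, DHHC-1 secrets elided, key registration assumed to have happened earlier in the run):
# Host-side SPDK initiator (RPC socket /var/tmp/host.sock): limit negotiation
# to the digest/dhgroup combination under test for this pass.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# Target side (default RPC socket assumed): allow the host NQN to authenticate
# with key1/ckey1, assumed to have been registered earlier in the test.
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Attach from the SPDK initiator with the same keys, then read the negotiated
# auth parameters back from the target and detach.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq '.[0].auth'
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# Repeat the handshake with the kernel initiator via nvme-cli (secrets elided),
# then remove the host entry before the next pass.
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid 8013ee90-59d8-e711-906e-00163566263e \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e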
00:23:12.271 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:23:13.244 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.244 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:13.244 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.244 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.244 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.244 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:13.244 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:13.244 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.244 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.536 00:23:13.536 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.536 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.536 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:13.818 { 00:23:13.818 "cntlid": 109, 00:23:13.818 "qid": 0, 00:23:13.818 "state": "enabled", 00:23:13.818 "thread": "nvmf_tgt_poll_group_000", 00:23:13.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:13.818 "listen_address": { 00:23:13.818 "trtype": "RDMA", 00:23:13.818 "adrfam": "IPv4", 00:23:13.818 "traddr": "192.168.100.8", 00:23:13.818 "trsvcid": "4420" 00:23:13.818 }, 00:23:13.818 "peer_address": { 00:23:13.818 "trtype": "RDMA", 00:23:13.818 "adrfam": "IPv4", 00:23:13.818 "traddr": "192.168.100.8", 00:23:13.818 "trsvcid": "49692" 00:23:13.818 }, 00:23:13.818 "auth": { 00:23:13.818 "state": "completed", 00:23:13.818 "digest": "sha512", 00:23:13.818 "dhgroup": "ffdhe2048" 00:23:13.818 } 00:23:13.818 } 00:23:13.818 ]' 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.818 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.077 10:55:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:14.077 10:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:14.645 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:14.905 10:55:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:14.905 10:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:15.164 00:23:15.164 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.164 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.164 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.423 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.423 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.423 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.423 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.423 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.423 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:15.423 { 00:23:15.423 "cntlid": 111, 00:23:15.423 "qid": 0, 00:23:15.423 "state": "enabled", 00:23:15.423 "thread": "nvmf_tgt_poll_group_000", 00:23:15.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:15.423 "listen_address": { 00:23:15.423 "trtype": "RDMA", 00:23:15.423 "adrfam": "IPv4", 00:23:15.423 "traddr": "192.168.100.8", 00:23:15.423 "trsvcid": "4420" 00:23:15.423 }, 00:23:15.423 "peer_address": { 00:23:15.423 "trtype": "RDMA", 00:23:15.423 "adrfam": "IPv4", 00:23:15.423 "traddr": "192.168.100.8", 00:23:15.423 "trsvcid": "40329" 00:23:15.423 }, 00:23:15.423 "auth": { 00:23:15.423 "state": "completed", 00:23:15.423 "digest": "sha512", 00:23:15.423 "dhgroup": "ffdhe2048" 00:23:15.423 } 00:23:15.423 } 00:23:15.423 ]' 00:23:15.423 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:15.423 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.423 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:15.682 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:15.682 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:15.682 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.682 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.682 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.941 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:15.941 10:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:16.509 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.509 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:16.509 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.509 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.509 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.509 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.509 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:16.509 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:16.509 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.768 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.026 00:23:17.026 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:17.026 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.026 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:17.285 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.285 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.285 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.285 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.285 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.285 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:17.285 { 00:23:17.285 "cntlid": 113, 00:23:17.285 "qid": 0, 00:23:17.285 "state": "enabled", 00:23:17.285 "thread": "nvmf_tgt_poll_group_000", 00:23:17.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:17.285 "listen_address": { 00:23:17.285 "trtype": "RDMA", 00:23:17.285 "adrfam": "IPv4", 00:23:17.285 "traddr": "192.168.100.8", 00:23:17.285 "trsvcid": "4420" 00:23:17.285 }, 00:23:17.285 "peer_address": { 00:23:17.285 "trtype": "RDMA", 00:23:17.285 "adrfam": "IPv4", 00:23:17.285 "traddr": "192.168.100.8", 00:23:17.285 "trsvcid": "56040" 00:23:17.285 }, 00:23:17.285 "auth": { 00:23:17.285 "state": "completed", 00:23:17.285 "digest": "sha512", 00:23:17.285 "dhgroup": "ffdhe3072" 00:23:17.285 } 00:23:17.285 } 00:23:17.285 ]' 00:23:17.285 10:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:17.285 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.285 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:17.285 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:17.285 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:17.285 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.285 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.285 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.544 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:17.544 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:18.112 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.371 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.939 00:23:18.939 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:18.939 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.939 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:18.939 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.939 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.940 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.940 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.940 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.940 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:18.940 { 00:23:18.940 "cntlid": 115, 00:23:18.940 "qid": 0, 00:23:18.940 "state": "enabled", 00:23:18.940 "thread": "nvmf_tgt_poll_group_000", 00:23:18.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:18.940 "listen_address": { 00:23:18.940 "trtype": "RDMA", 00:23:18.940 "adrfam": "IPv4", 00:23:18.940 "traddr": "192.168.100.8", 00:23:18.940 "trsvcid": "4420" 00:23:18.940 }, 00:23:18.940 "peer_address": { 00:23:18.940 "trtype": "RDMA", 00:23:18.940 "adrfam": "IPv4", 00:23:18.940 "traddr": "192.168.100.8", 00:23:18.940 "trsvcid": "35438" 00:23:18.940 }, 00:23:18.940 "auth": { 00:23:18.940 "state": "completed", 00:23:18.940 "digest": "sha512", 00:23:18.940 "dhgroup": "ffdhe3072" 00:23:18.940 } 00:23:18.940 } 00:23:18.940 ]' 00:23:18.940 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:18.940 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:18.940 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
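In every pass the same three probes recur: auth.state must read completed, and auth.digest/auth.dhgroup must match whatever bdev_nvme_set_options restricted the host to (sha512 and ffdhe3072 at this point in the log). A standalone sketch of that check, not part of auth.sh itself, with the rpc.py path shortened and the expected values filled in for this pass:
# Hypothetical helper equivalent to the jq probes above: read the qpairs back
# from the target and fail if the negotiated parameters are not the expected ones.
expected_digest=sha512
expected_dhgroup=ffdhe3072
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
state=$(jq -r '.[0].auth.state' <<< "$qpairs")
digest=$(jq -r '.[0].auth.digest' <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
[[ $state == completed ]] || { echo "authentication did not complete"; exit 1; }
[[ $digest == "$expected_digest" && $dhgroup == "$expected_dhgroup" ]] || {
    echo "unexpected auth parameters: $digest/$dhgroup"; exit 1;
}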
00:23:18.940 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:18.940 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:19.198 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.198 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.198 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.198 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:23:19.199 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:23:20.136 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.136 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:20.136 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.136 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.136 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.136 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.136 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:20.136 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:20.136 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:23:20.136 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.136 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:20.136 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:20.136 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:20.136 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.136 
10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.136 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.136 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.395 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.395 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.395 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.395 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.395 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:20.654 { 00:23:20.654 "cntlid": 117, 00:23:20.654 "qid": 0, 00:23:20.654 "state": "enabled", 00:23:20.654 "thread": "nvmf_tgt_poll_group_000", 00:23:20.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:20.654 "listen_address": { 00:23:20.654 "trtype": "RDMA", 00:23:20.654 "adrfam": "IPv4", 00:23:20.654 "traddr": "192.168.100.8", 00:23:20.654 "trsvcid": "4420" 00:23:20.654 }, 00:23:20.654 "peer_address": { 00:23:20.654 "trtype": "RDMA", 00:23:20.654 "adrfam": "IPv4", 00:23:20.654 "traddr": "192.168.100.8", 00:23:20.654 "trsvcid": "55431" 00:23:20.654 }, 00:23:20.654 "auth": { 00:23:20.654 "state": "completed", 00:23:20.654 "digest": "sha512", 00:23:20.654 "dhgroup": "ffdhe3072" 00:23:20.654 } 00:23:20.654 } 00:23:20.654 ]' 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:20.654 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:20.913 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:20.913 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:20.913 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.913 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.913 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.172 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:21.172 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:21.739 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.739 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:21.739 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.739 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.739 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.739 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:21.739 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:21.739 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
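Each of these passes also exercises the same secrets from the kernel initiator: the nvme-cli connect/disconnect seen a few lines up uses the DHHC-1 blobs printed in the trace. Stripped of the log prefixes, that host-side leg looks roughly like the following (transport address, NQNs and host ID are the values from this log; the secrets are abbreviated here rather than repeated in full):

    # Connect the kernel NVMe host over RDMA with DH-HMAC-CHAP, then tear the
    # session down again; a successful connect is the pass criterion.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0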
00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:21.998 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.257 00:23:22.257 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:22.257 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.257 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:22.516 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.516 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.516 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.516 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.516 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.516 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.516 { 00:23:22.516 "cntlid": 119, 00:23:22.516 "qid": 0, 00:23:22.516 "state": "enabled", 00:23:22.516 "thread": "nvmf_tgt_poll_group_000", 00:23:22.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:22.516 "listen_address": { 00:23:22.516 "trtype": "RDMA", 00:23:22.516 "adrfam": "IPv4", 00:23:22.516 "traddr": "192.168.100.8", 00:23:22.516 "trsvcid": "4420" 00:23:22.516 }, 00:23:22.516 "peer_address": { 00:23:22.516 "trtype": "RDMA", 00:23:22.516 "adrfam": "IPv4", 00:23:22.516 "traddr": "192.168.100.8", 00:23:22.517 "trsvcid": "52433" 00:23:22.517 }, 00:23:22.517 "auth": { 00:23:22.517 "state": "completed", 00:23:22.517 "digest": "sha512", 00:23:22.517 "dhgroup": "ffdhe3072" 
00:23:22.517 } 00:23:22.517 } 00:23:22.517 ]' 00:23:22.517 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.517 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.517 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.517 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:22.517 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:22.517 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.517 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.517 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.776 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:22.776 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:23.343 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.603 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.862 00:23:24.122 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:24.122 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:24.122 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.122 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.122 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.122 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.122 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.122 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.122 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:24.122 { 00:23:24.122 "cntlid": 121, 00:23:24.122 "qid": 0, 00:23:24.122 "state": "enabled", 00:23:24.122 "thread": "nvmf_tgt_poll_group_000", 00:23:24.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:24.122 "listen_address": { 00:23:24.122 "trtype": "RDMA", 00:23:24.122 "adrfam": "IPv4", 00:23:24.122 "traddr": "192.168.100.8", 00:23:24.122 "trsvcid": "4420" 00:23:24.122 }, 00:23:24.122 "peer_address": { 00:23:24.122 "trtype": "RDMA", 
00:23:24.122 "adrfam": "IPv4", 00:23:24.122 "traddr": "192.168.100.8", 00:23:24.122 "trsvcid": "37421" 00:23:24.122 }, 00:23:24.122 "auth": { 00:23:24.122 "state": "completed", 00:23:24.122 "digest": "sha512", 00:23:24.122 "dhgroup": "ffdhe4096" 00:23:24.122 } 00:23:24.122 } 00:23:24.122 ]' 00:23:24.122 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:24.381 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:24.381 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.381 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:24.381 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.381 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.381 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.381 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.640 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:24.640 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:25.209 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.209 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:25.209 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.209 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.209 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.209 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:25.209 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:25.209 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.468 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.727 00:23:25.727 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:25.727 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.727 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:25.987 { 00:23:25.987 "cntlid": 123, 00:23:25.987 "qid": 0, 00:23:25.987 "state": "enabled", 00:23:25.987 "thread": "nvmf_tgt_poll_group_000", 
00:23:25.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:25.987 "listen_address": { 00:23:25.987 "trtype": "RDMA", 00:23:25.987 "adrfam": "IPv4", 00:23:25.987 "traddr": "192.168.100.8", 00:23:25.987 "trsvcid": "4420" 00:23:25.987 }, 00:23:25.987 "peer_address": { 00:23:25.987 "trtype": "RDMA", 00:23:25.987 "adrfam": "IPv4", 00:23:25.987 "traddr": "192.168.100.8", 00:23:25.987 "trsvcid": "49816" 00:23:25.987 }, 00:23:25.987 "auth": { 00:23:25.987 "state": "completed", 00:23:25.987 "digest": "sha512", 00:23:25.987 "dhgroup": "ffdhe4096" 00:23:25.987 } 00:23:25.987 } 00:23:25.987 ]' 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.987 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.246 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:23:26.246 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:23:26.814 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:27.073 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.074 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.074 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.074 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.074 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.074 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.074 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.074 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.332 00:23:27.591 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:27.592 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.592 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:27.592 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.592 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.592 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.592 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.592 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
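The two RPCs that actually wire a key pair into the fabric appear back-to-back in the trace: the target registers the host NQN with its DH-HMAC-CHAP keys, then the host attaches a controller with the matching keys. Reduced to their essentials for the key2 pass (key2/ckey2 refer to key objects set up earlier in the test, not shown in this excerpt; the controller key is optional and is omitted for key3, per the ${ckeys[$3]:+...} expansion in the xtrace):

    # Target side: allow this host on the subsystem and pin its DH-HMAC-CHAP
    # key (key2) plus the bidirectional controller key (ckey2).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a bdev controller over RDMA with the matching keys;
    # the qpair's auth.state is then expected to read "completed".
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2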
00:23:27.592 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:27.592 { 00:23:27.592 "cntlid": 125, 00:23:27.592 "qid": 0, 00:23:27.592 "state": "enabled", 00:23:27.592 "thread": "nvmf_tgt_poll_group_000", 00:23:27.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:27.592 "listen_address": { 00:23:27.592 "trtype": "RDMA", 00:23:27.592 "adrfam": "IPv4", 00:23:27.592 "traddr": "192.168.100.8", 00:23:27.592 "trsvcid": "4420" 00:23:27.592 }, 00:23:27.592 "peer_address": { 00:23:27.592 "trtype": "RDMA", 00:23:27.592 "adrfam": "IPv4", 00:23:27.592 "traddr": "192.168.100.8", 00:23:27.592 "trsvcid": "37890" 00:23:27.592 }, 00:23:27.592 "auth": { 00:23:27.592 "state": "completed", 00:23:27.592 "digest": "sha512", 00:23:27.592 "dhgroup": "ffdhe4096" 00:23:27.592 } 00:23:27.592 } 00:23:27.592 ]' 00:23:27.592 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:27.592 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:27.592 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:27.851 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:27.851 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:27.851 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.851 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.851 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.111 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:28.111 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:28.681 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.681 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:28.681 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.681 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.681 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.681 10:56:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:28.681 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:28.681 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:28.940 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:29.199 00:23:29.199 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.199 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.199 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.458 10:56:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.458 { 00:23:29.458 "cntlid": 127, 00:23:29.458 "qid": 0, 00:23:29.458 "state": "enabled", 00:23:29.458 "thread": "nvmf_tgt_poll_group_000", 00:23:29.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:29.458 "listen_address": { 00:23:29.458 "trtype": "RDMA", 00:23:29.458 "adrfam": "IPv4", 00:23:29.458 "traddr": "192.168.100.8", 00:23:29.458 "trsvcid": "4420" 00:23:29.458 }, 00:23:29.458 "peer_address": { 00:23:29.458 "trtype": "RDMA", 00:23:29.458 "adrfam": "IPv4", 00:23:29.458 "traddr": "192.168.100.8", 00:23:29.458 "trsvcid": "50644" 00:23:29.458 }, 00:23:29.458 "auth": { 00:23:29.458 "state": "completed", 00:23:29.458 "digest": "sha512", 00:23:29.458 "dhgroup": "ffdhe4096" 00:23:29.458 } 00:23:29.458 } 00:23:29.458 ]' 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.458 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.718 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:29.718 10:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:30.286 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.545 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.115 00:23:31.115 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:31.115 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:31.115 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.115 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.115 10:56:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.115 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.115 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.115 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.115 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:31.115 { 00:23:31.115 "cntlid": 129, 00:23:31.115 "qid": 0, 00:23:31.115 "state": "enabled", 00:23:31.115 "thread": "nvmf_tgt_poll_group_000", 00:23:31.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:31.115 "listen_address": { 00:23:31.115 "trtype": "RDMA", 00:23:31.115 "adrfam": "IPv4", 00:23:31.115 "traddr": "192.168.100.8", 00:23:31.115 "trsvcid": "4420" 00:23:31.115 }, 00:23:31.115 "peer_address": { 00:23:31.115 "trtype": "RDMA", 00:23:31.115 "adrfam": "IPv4", 00:23:31.115 "traddr": "192.168.100.8", 00:23:31.115 "trsvcid": "60408" 00:23:31.115 }, 00:23:31.115 "auth": { 00:23:31.115 "state": "completed", 00:23:31.115 "digest": "sha512", 00:23:31.115 "dhgroup": "ffdhe6144" 00:23:31.115 } 00:23:31.115 } 00:23:31.115 ]' 00:23:31.115 10:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:31.374 10:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:31.375 10:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:31.375 10:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:31.375 10:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:31.375 10:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.375 10:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.375 10:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:31.634 10:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:31.634 10:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:32.201 10:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.201 10:56:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:32.201 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.201 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.201 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.201 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:32.201 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.201 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.461 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:23:32.461 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:32.461 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:32.461 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:32.462 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:32.462 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:32.462 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.462 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.462 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.462 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.462 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.462 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.462 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.721 00:23:32.980 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:32.980 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:23:32.980 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.980 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.980 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.980 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.980 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.980 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.980 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:32.980 { 00:23:32.980 "cntlid": 131, 00:23:32.980 "qid": 0, 00:23:32.980 "state": "enabled", 00:23:32.980 "thread": "nvmf_tgt_poll_group_000", 00:23:32.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:32.980 "listen_address": { 00:23:32.980 "trtype": "RDMA", 00:23:32.980 "adrfam": "IPv4", 00:23:32.980 "traddr": "192.168.100.8", 00:23:32.980 "trsvcid": "4420" 00:23:32.980 }, 00:23:32.980 "peer_address": { 00:23:32.980 "trtype": "RDMA", 00:23:32.980 "adrfam": "IPv4", 00:23:32.980 "traddr": "192.168.100.8", 00:23:32.980 "trsvcid": "58397" 00:23:32.980 }, 00:23:32.980 "auth": { 00:23:32.980 "state": "completed", 00:23:32.980 "digest": "sha512", 00:23:32.980 "dhgroup": "ffdhe6144" 00:23:32.980 } 00:23:32.980 } 00:23:32.980 ]' 00:23:32.980 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:33.238 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:33.238 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:33.238 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:33.238 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:33.238 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:33.238 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:33.238 10:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:33.497 10:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:23:33.497 10:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret 
DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:23:34.067 10:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.067 10:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:34.067 10:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.067 10:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.067 10:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.067 10:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:34.067 10:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:34.067 10:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.326 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.585 00:23:34.585 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:34.585 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:34.586 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.845 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.845 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.845 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.845 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.845 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.845 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:34.845 { 00:23:34.845 "cntlid": 133, 00:23:34.845 "qid": 0, 00:23:34.845 "state": "enabled", 00:23:34.845 "thread": "nvmf_tgt_poll_group_000", 00:23:34.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:34.845 "listen_address": { 00:23:34.845 "trtype": "RDMA", 00:23:34.845 "adrfam": "IPv4", 00:23:34.845 "traddr": "192.168.100.8", 00:23:34.845 "trsvcid": "4420" 00:23:34.845 }, 00:23:34.845 "peer_address": { 00:23:34.845 "trtype": "RDMA", 00:23:34.845 "adrfam": "IPv4", 00:23:34.845 "traddr": "192.168.100.8", 00:23:34.845 "trsvcid": "39346" 00:23:34.845 }, 00:23:34.845 "auth": { 00:23:34.845 "state": "completed", 00:23:34.845 "digest": "sha512", 00:23:34.845 "dhgroup": "ffdhe6144" 00:23:34.846 } 00:23:34.846 } 00:23:34.846 ]' 00:23:34.846 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:34.846 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:34.846 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:34.846 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:34.846 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:34.846 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.846 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.846 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:35.105 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:35.105 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:35.673 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.932 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:35.932 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.932 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.932 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.932 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:35.932 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:35.932 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:36.192 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:36.450 00:23:36.450 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.450 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.450 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.708 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.708 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.708 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.708 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.708 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.708 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.708 { 00:23:36.708 "cntlid": 135, 00:23:36.708 "qid": 0, 00:23:36.708 "state": "enabled", 00:23:36.708 "thread": "nvmf_tgt_poll_group_000", 00:23:36.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:36.708 "listen_address": { 00:23:36.708 "trtype": "RDMA", 00:23:36.708 "adrfam": "IPv4", 00:23:36.708 "traddr": "192.168.100.8", 00:23:36.708 "trsvcid": "4420" 00:23:36.708 }, 00:23:36.708 "peer_address": { 00:23:36.708 "trtype": "RDMA", 00:23:36.708 "adrfam": "IPv4", 00:23:36.708 "traddr": "192.168.100.8", 00:23:36.708 "trsvcid": "39253" 00:23:36.708 }, 00:23:36.708 "auth": { 00:23:36.708 "state": "completed", 00:23:36.708 "digest": "sha512", 00:23:36.708 "dhgroup": "ffdhe6144" 00:23:36.708 } 00:23:36.708 } 00:23:36.708 ]' 00:23:36.708 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:36.709 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:36.709 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:36.709 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:36.709 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:36.709 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.709 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.709 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.968 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 
00:23:36.968 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:37.536 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.795 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:37.795 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.795 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.795 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.795 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.795 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:37.795 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:37.795 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.053 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.054 10:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.312 00:23:38.312 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:38.312 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:38.312 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.572 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.572 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.572 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.572 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.572 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.572 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:38.572 { 00:23:38.572 "cntlid": 137, 00:23:38.572 "qid": 0, 00:23:38.572 "state": "enabled", 00:23:38.572 "thread": "nvmf_tgt_poll_group_000", 00:23:38.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:38.572 "listen_address": { 00:23:38.572 "trtype": "RDMA", 00:23:38.572 "adrfam": "IPv4", 00:23:38.572 "traddr": "192.168.100.8", 00:23:38.572 "trsvcid": "4420" 00:23:38.572 }, 00:23:38.572 "peer_address": { 00:23:38.572 "trtype": "RDMA", 00:23:38.572 "adrfam": "IPv4", 00:23:38.572 "traddr": "192.168.100.8", 00:23:38.572 "trsvcid": "49047" 00:23:38.572 }, 00:23:38.572 "auth": { 00:23:38.572 "state": "completed", 00:23:38.572 "digest": "sha512", 00:23:38.572 "dhgroup": "ffdhe8192" 00:23:38.572 } 00:23:38.572 } 00:23:38.572 ]' 00:23:38.572 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:38.572 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:38.572 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:38.572 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:38.572 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:38.830 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.830 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.830 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.830 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:38.830 10:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.766 10:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.333 00:23:40.333 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:40.333 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:40.333 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:40.591 { 00:23:40.591 "cntlid": 139, 00:23:40.591 "qid": 0, 00:23:40.591 "state": "enabled", 00:23:40.591 "thread": "nvmf_tgt_poll_group_000", 00:23:40.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:40.591 "listen_address": { 00:23:40.591 "trtype": "RDMA", 00:23:40.591 "adrfam": "IPv4", 00:23:40.591 "traddr": "192.168.100.8", 00:23:40.591 "trsvcid": "4420" 00:23:40.591 }, 00:23:40.591 "peer_address": { 00:23:40.591 "trtype": "RDMA", 00:23:40.591 "adrfam": "IPv4", 00:23:40.591 "traddr": "192.168.100.8", 00:23:40.591 "trsvcid": "44252" 00:23:40.591 }, 00:23:40.591 "auth": { 00:23:40.591 "state": "completed", 00:23:40.591 "digest": "sha512", 00:23:40.591 "dhgroup": "ffdhe8192" 00:23:40.591 } 00:23:40.591 } 00:23:40.591 ]' 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:40.591 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.850 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:23:40.850 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: --dhchap-ctrl-secret DHHC-1:02:YzdjMmQ4MDYzMTE1N2I3Nzc0NGI3Nzk5ZWE1ZDMzZmVmOTcyMjllZGQ3MjYxMGExhP+uug==: 00:23:41.418 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:41.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:41.677 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:41.677 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.677 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.677 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.677 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:41.677 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:41.677 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.936 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.195 00:23:42.195 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:42.195 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:42.195 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.454 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.454 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.454 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.454 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.454 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.454 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:42.454 { 00:23:42.454 "cntlid": 141, 00:23:42.454 "qid": 0, 00:23:42.454 "state": "enabled", 00:23:42.454 "thread": "nvmf_tgt_poll_group_000", 00:23:42.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:42.454 "listen_address": { 00:23:42.454 "trtype": "RDMA", 00:23:42.454 "adrfam": "IPv4", 00:23:42.454 "traddr": "192.168.100.8", 00:23:42.454 "trsvcid": "4420" 00:23:42.454 }, 00:23:42.454 "peer_address": { 00:23:42.454 "trtype": "RDMA", 00:23:42.454 "adrfam": "IPv4", 00:23:42.454 "traddr": "192.168.100.8", 00:23:42.454 "trsvcid": "37787" 00:23:42.454 }, 00:23:42.454 "auth": { 00:23:42.454 "state": "completed", 00:23:42.454 "digest": "sha512", 00:23:42.454 "dhgroup": "ffdhe8192" 00:23:42.454 } 00:23:42.454 } 00:23:42.454 ]' 00:23:42.454 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:42.454 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:42.454 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:42.714 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:42.714 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:42.714 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.714 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.714 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.714 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:42.714 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:01:MTkxY2RiZjVmMDJjYjY2ZWY3YTg0OGJiYzBhZWUwNzIyocsn: 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:43.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.652 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.911 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.911 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:43.911 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:43.911 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:44.170 00:23:44.170 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:44.170 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.170 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:44.429 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.429 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.429 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.429 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.429 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.429 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:44.429 { 00:23:44.429 "cntlid": 143, 00:23:44.429 "qid": 0, 00:23:44.429 "state": "enabled", 00:23:44.429 "thread": "nvmf_tgt_poll_group_000", 00:23:44.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:44.429 "listen_address": { 00:23:44.429 "trtype": "RDMA", 00:23:44.429 "adrfam": "IPv4", 00:23:44.429 "traddr": "192.168.100.8", 00:23:44.429 "trsvcid": "4420" 00:23:44.429 }, 00:23:44.429 "peer_address": { 00:23:44.429 "trtype": "RDMA", 00:23:44.429 "adrfam": "IPv4", 00:23:44.429 "traddr": "192.168.100.8", 00:23:44.429 "trsvcid": "58820" 00:23:44.429 }, 00:23:44.429 "auth": { 00:23:44.429 "state": "completed", 00:23:44.429 "digest": "sha512", 00:23:44.429 "dhgroup": "ffdhe8192" 00:23:44.429 } 00:23:44.429 } 00:23:44.429 ]' 00:23:44.429 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:44.429 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:44.429 10:56:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:44.429 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:44.429 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:44.688 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:44.688 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:44.688 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:44.946 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:44.946 10:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:45.514 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.514 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:45.514 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.514 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.514 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.514 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:45.514 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:45.514 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:45.514 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:45.514 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:45.514 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:45.772 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:45.772 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:45.772 10:56:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:45.773 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:45.773 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:45.773 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:45.773 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.773 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.773 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.773 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.773 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.773 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.773 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.340 00:23:46.340 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:46.340 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:46.340 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.340 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.340 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.340 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.340 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.340 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.340 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:46.340 { 00:23:46.340 "cntlid": 145, 00:23:46.340 "qid": 0, 00:23:46.340 "state": "enabled", 00:23:46.340 "thread": "nvmf_tgt_poll_group_000", 00:23:46.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:46.340 "listen_address": { 00:23:46.340 "trtype": "RDMA", 00:23:46.340 "adrfam": "IPv4", 00:23:46.340 "traddr": "192.168.100.8", 00:23:46.340 "trsvcid": "4420" 00:23:46.340 }, 00:23:46.340 
"peer_address": { 00:23:46.340 "trtype": "RDMA", 00:23:46.340 "adrfam": "IPv4", 00:23:46.340 "traddr": "192.168.100.8", 00:23:46.340 "trsvcid": "41877" 00:23:46.340 }, 00:23:46.340 "auth": { 00:23:46.340 "state": "completed", 00:23:46.340 "digest": "sha512", 00:23:46.340 "dhgroup": "ffdhe8192" 00:23:46.340 } 00:23:46.340 } 00:23:46.340 ]' 00:23:46.340 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:46.598 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:46.598 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:46.598 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:46.598 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:46.598 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.598 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.598 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.857 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:46.857 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MDM2ZDA4YmY0MTg0NTdmNGY5YTExOGQ1NGVmYmRiMGFhYWE2NmE2Mzc4NjlmMmZjfpEwIg==: --dhchap-ctrl-secret DHHC-1:03:NGRmNzkxMjAyYTg3ZTFkNzg2OGFmYTc2MzdhMTQyZDViMmFmNWM1MDc0OWRmMTZiZGVjZTExY2M3ZTNiYWIxZT9hAKw=: 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.425 10:56:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:47.425 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:47.993 request: 00:23:47.993 { 00:23:47.993 "name": "nvme0", 00:23:47.993 "trtype": "rdma", 00:23:47.993 "traddr": "192.168.100.8", 00:23:47.993 "adrfam": "ipv4", 00:23:47.993 "trsvcid": "4420", 00:23:47.993 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:47.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:47.993 "prchk_reftag": false, 00:23:47.993 "prchk_guard": false, 00:23:47.993 "hdgst": false, 00:23:47.993 "ddgst": false, 00:23:47.993 "dhchap_key": "key2", 00:23:47.993 "allow_unrecognized_csi": false, 00:23:47.993 "method": "bdev_nvme_attach_controller", 00:23:47.993 "req_id": 1 00:23:47.993 } 00:23:47.993 Got JSON-RPC error response 00:23:47.993 response: 00:23:47.993 { 00:23:47.993 "code": -5, 00:23:47.993 "message": "Input/output error" 00:23:47.993 } 00:23:47.993 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:47.993 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:47.993 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:47.994 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:48.630 request: 00:23:48.630 { 00:23:48.630 "name": "nvme0", 00:23:48.630 "trtype": "rdma", 00:23:48.630 "traddr": "192.168.100.8", 00:23:48.630 "adrfam": "ipv4", 00:23:48.630 "trsvcid": "4420", 00:23:48.630 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:48.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:48.630 "prchk_reftag": false, 00:23:48.630 "prchk_guard": false, 00:23:48.630 "hdgst": false, 00:23:48.630 "ddgst": false, 00:23:48.630 "dhchap_key": "key1", 00:23:48.630 "dhchap_ctrlr_key": "ckey2", 00:23:48.630 "allow_unrecognized_csi": false, 00:23:48.630 "method": "bdev_nvme_attach_controller", 00:23:48.630 "req_id": 1 00:23:48.630 } 00:23:48.630 Got JSON-RPC error response 00:23:48.630 response: 00:23:48.630 { 00:23:48.630 "code": -5, 00:23:48.630 "message": "Input/output error" 00:23:48.630 } 00:23:48.630 10:56:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.630 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.008 request: 00:23:49.008 { 00:23:49.008 "name": "nvme0", 
00:23:49.008 "trtype": "rdma", 00:23:49.008 "traddr": "192.168.100.8", 00:23:49.008 "adrfam": "ipv4", 00:23:49.008 "trsvcid": "4420", 00:23:49.008 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:49.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:49.008 "prchk_reftag": false, 00:23:49.008 "prchk_guard": false, 00:23:49.008 "hdgst": false, 00:23:49.008 "ddgst": false, 00:23:49.008 "dhchap_key": "key1", 00:23:49.008 "dhchap_ctrlr_key": "ckey1", 00:23:49.008 "allow_unrecognized_csi": false, 00:23:49.008 "method": "bdev_nvme_attach_controller", 00:23:49.008 "req_id": 1 00:23:49.008 } 00:23:49.008 Got JSON-RPC error response 00:23:49.008 response: 00:23:49.008 { 00:23:49.008 "code": -5, 00:23:49.008 "message": "Input/output error" 00:23:49.008 } 00:23:49.008 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:49.008 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:49.008 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:49.008 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:49.008 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:49.008 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.008 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.008 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.008 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1720484 00:23:49.008 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1720484 ']' 00:23:49.009 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1720484 00:23:49.009 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:49.009 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.009 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1720484 00:23:49.009 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.009 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.009 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1720484' 00:23:49.009 killing process with pid 1720484 00:23:49.009 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1720484 00:23:49.009 10:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1720484 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.390 10:56:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1744668 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1744668 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1744668 ']' 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.390 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.327 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.327 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:51.327 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.327 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.327 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.327 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.327 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:51.327 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1744668 00:23:51.327 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1744668 ']' 00:23:51.327 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.327 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.327 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
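At this point the target has been restarted as nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth and the test waits for its RPC socket; the next step, visible below, loads the previously generated DH-HMAC-CHAP key files into the target keyring. A minimal sketch of that call pattern, assuming the target's default RPC socket /var/tmp/spdk.sock (rpc_cmd does not pass -s here); file names match the /tmp/spdk.key-* paths shown in the trace:

# Load host keys and controller keys into the target keyring before enabling auth.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
"$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.wMD
"$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ChZ
"$rpc" keyring_file_add_key key1  /tmp/spdk.key-sha256.lgf
"$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VVl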
00:23:51.327 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.327 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.587 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.587 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:51.587 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:51.587 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.587 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.845 null0 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wMD 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ChZ ]] 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ChZ 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.lgf 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.VVl ]] 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VVl 00:23:51.845 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.846 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nX4 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.HkE ]] 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HkE 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5Si 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:52.105 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:52.673 nvme0n1 00:23:52.673 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:52.673 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:52.673 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.933 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.933 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.933 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.933 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.933 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.933 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:52.933 { 00:23:52.933 "cntlid": 1, 00:23:52.933 "qid": 0, 00:23:52.933 "state": "enabled", 00:23:52.933 "thread": "nvmf_tgt_poll_group_000", 00:23:52.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:52.933 "listen_address": { 00:23:52.933 "trtype": "RDMA", 00:23:52.933 "adrfam": "IPv4", 00:23:52.933 "traddr": "192.168.100.8", 00:23:52.933 "trsvcid": "4420" 00:23:52.933 }, 00:23:52.933 "peer_address": { 00:23:52.933 "trtype": "RDMA", 00:23:52.933 "adrfam": "IPv4", 00:23:52.933 "traddr": "192.168.100.8", 00:23:52.933 "trsvcid": "34639" 00:23:52.933 }, 00:23:52.933 "auth": { 00:23:52.933 "state": "completed", 00:23:52.933 "digest": "sha512", 00:23:52.933 "dhgroup": "ffdhe8192" 00:23:52.933 } 00:23:52.933 } 00:23:52.933 ]' 00:23:52.933 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:52.933 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:52.933 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:52.933 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:52.933 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:53.191 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:53.191 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:53.192 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:53.192 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:53.192 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:54.128 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:54.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:54.129 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:54.129 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.129 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.129 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.129 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:54.129 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.129 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.129 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.129 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:54.129 10:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:54.129 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:54.129 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:54.129 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:54.129 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:54.129 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.129 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:54.129 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.129 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:54.388 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:54.388 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:54.388 request: 00:23:54.388 { 00:23:54.388 "name": "nvme0", 00:23:54.388 "trtype": "rdma", 00:23:54.388 "traddr": "192.168.100.8", 00:23:54.388 "adrfam": "ipv4", 00:23:54.388 "trsvcid": "4420", 00:23:54.388 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:54.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:54.388 "prchk_reftag": false, 00:23:54.388 "prchk_guard": false, 00:23:54.388 "hdgst": false, 00:23:54.388 "ddgst": false, 00:23:54.388 "dhchap_key": "key3", 00:23:54.388 "allow_unrecognized_csi": false, 00:23:54.388 "method": "bdev_nvme_attach_controller", 00:23:54.388 "req_id": 1 00:23:54.388 } 00:23:54.388 Got JSON-RPC error response 00:23:54.388 response: 00:23:54.388 { 00:23:54.388 "code": -5, 00:23:54.388 "message": "Input/output error" 00:23:54.388 } 00:23:54.388 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:54.388 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:54.388 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:54.388 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:54.388 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:54.388 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:54.388 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:54.388 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:54.648 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:54.648 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:54.648 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:54.648 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:54.648 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.648 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:54.648 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.648 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
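The two NOT bdev_connect attempts in this stretch are expected failures driven by host-side restrictions: first the initiator is limited to --dhchap-digests sha256, then the digests are re-opened but --dhchap-dhgroups is limited to ffdhe2048, and in both cases the attach with key3 returns -5, presumably because the restricted offer no longer matches what the target negotiates for this host (the earlier qpair shows sha512/ffdhe8192). A sketch of the restrict-then-probe pattern, with commands copied from the trace:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Restrict the host initiator: allow every digest but only the ffdhe2048 DH group.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
# This attach is expected to fail (-5) while the restriction is in place.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 \
    || echo "attach rejected as expected under the restricted DH group list"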
00:23:54.648 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:54.648 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:54.907 request: 00:23:54.907 { 00:23:54.907 "name": "nvme0", 00:23:54.907 "trtype": "rdma", 00:23:54.907 "traddr": "192.168.100.8", 00:23:54.907 "adrfam": "ipv4", 00:23:54.907 "trsvcid": "4420", 00:23:54.907 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:54.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:54.907 "prchk_reftag": false, 00:23:54.907 "prchk_guard": false, 00:23:54.907 "hdgst": false, 00:23:54.907 "ddgst": false, 00:23:54.907 "dhchap_key": "key3", 00:23:54.907 "allow_unrecognized_csi": false, 00:23:54.907 "method": "bdev_nvme_attach_controller", 00:23:54.907 "req_id": 1 00:23:54.907 } 00:23:54.907 Got JSON-RPC error response 00:23:54.907 response: 00:23:54.907 { 00:23:54.907 "code": -5, 00:23:54.907 "message": "Input/output error" 00:23:54.907 } 00:23:54.907 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:54.907 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:54.907 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:54.907 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:54.907 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:54.907 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:54.907 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:54.907 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:54.907 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:54.907 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:55.167 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:55.168 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:55.427 request: 00:23:55.427 { 00:23:55.427 "name": "nvme0", 00:23:55.427 "trtype": "rdma", 00:23:55.427 "traddr": "192.168.100.8", 00:23:55.427 "adrfam": "ipv4", 00:23:55.427 "trsvcid": "4420", 00:23:55.427 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:55.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:55.427 "prchk_reftag": false, 00:23:55.427 "prchk_guard": false, 00:23:55.427 "hdgst": false, 00:23:55.427 "ddgst": false, 00:23:55.427 "dhchap_key": "key0", 00:23:55.427 "dhchap_ctrlr_key": "key1", 00:23:55.427 "allow_unrecognized_csi": false, 00:23:55.427 "method": "bdev_nvme_attach_controller", 00:23:55.427 "req_id": 1 00:23:55.427 } 00:23:55.427 Got JSON-RPC error response 00:23:55.427 response: 00:23:55.427 { 00:23:55.427 "code": -5, 00:23:55.427 "message": "Input/output error" 00:23:55.427 } 00:23:55.427 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:55.427 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:55.427 
10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:55.427 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:55.427 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:55.427 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:55.427 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:55.685 nvme0n1 00:23:55.685 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:55.685 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:55.685 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:55.944 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.944 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:55.944 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.202 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:23:56.202 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.202 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.202 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.202 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:56.202 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:56.203 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:57.136 nvme0n1 00:23:57.136 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:57.137 10:56:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:57.137 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.137 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.137 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:57.137 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.137 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.137 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.137 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:57.137 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:57.137 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.395 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.395 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:57.395 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: --dhchap-ctrl-secret DHHC-1:03:MmJlNWZiM2ZiOGQwNTVjZmVlMGJhMGRmMTFiMDIyZTM4ZWUzMjc1MTljODM1Mjc4MjYxODRlMTAzYzVmOTViMU39fRY=: 00:23:57.962 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:57.962 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:57.962 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:57.962 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:57.962 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:57.962 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:57.962 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:57.962 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:57.963 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:58.222 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:23:58.222 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:58.222 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:58.222 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:58.222 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:58.222 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:58.222 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:58.222 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:58.222 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:58.222 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:58.790 request: 00:23:58.790 { 00:23:58.790 "name": "nvme0", 00:23:58.790 "trtype": "rdma", 00:23:58.790 "traddr": "192.168.100.8", 00:23:58.790 "adrfam": "ipv4", 00:23:58.790 "trsvcid": "4420", 00:23:58.790 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:58.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:58.790 "prchk_reftag": false, 00:23:58.790 "prchk_guard": false, 00:23:58.790 "hdgst": false, 00:23:58.790 "ddgst": false, 00:23:58.790 "dhchap_key": "key1", 00:23:58.790 "allow_unrecognized_csi": false, 00:23:58.790 "method": "bdev_nvme_attach_controller", 00:23:58.790 "req_id": 1 00:23:58.790 } 00:23:58.790 Got JSON-RPC error response 00:23:58.790 response: 00:23:58.790 { 00:23:58.790 "code": -5, 00:23:58.790 "message": "Input/output error" 00:23:58.790 } 00:23:58.790 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:58.790 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:58.790 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:58.790 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:58.790 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:58.790 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:58.790 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:59.358 nvme0n1 00:23:59.358 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:59.358 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:59.358 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.617 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.617 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:59.617 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:59.876 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:59.876 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.876 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.876 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.876 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:59.877 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:59.877 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:00.136 nvme0n1 00:24:00.136 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:24:00.136 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:24:00.136 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.395 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.395 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.395 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: '' 2s 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: ]] 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp: 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:00.654 10:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.559 10:56:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: 2s 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: ]] 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Yzc5NzVjODNkZTI5ZjA5YjQ5ZjAyZDVjMzQyYmMzNzQ0Njc1ZDBiZjg3YWQ5NjY5VOJbzw==: 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:02.559 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:05.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.094 10:56:43 
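The nvme_set_keys/waitforblk sequences above rotate the secret on the kernel-side controller rather than through an RPC: the helper resolves the fabrics controller node /sys/devices/virtual/nvme-fabrics/ctl/nvme0, writes the DHHC-1 secret (and, in the second pass, the controller secret) into it with a 2s timeout, and then waits for nvme0n1 to reappear in lsblk. A rough sketch, assuming the standard Linux sysfs attribute names dhchap_secret and dhchap_ctrl_secret, which the trace itself does not spell out:

dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
# Host secret (first pass in the trace); the controller-secret write is analogous.
echo 'DHHC-1:01:ODM4NTk0ZjUyYjQ3YTBhZGMyYWZjMjMxMjQ0MmVmYjcGPwqp:' > "$dev/dhchap_secret"   # attribute name assumed
# Wait for the namespace to come back once re-authentication completes.
until lsblk -l -o NAME | grep -q -w nvme0n1; do
    sleep 1
done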
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:05.094 10:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:05.663 nvme0n1 00:24:05.663 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:05.663 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.663 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.663 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.663 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:05.663 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:05.922 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:24:05.922 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:24:05.922 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.182 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.182 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:06.182 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.182 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.182 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.182 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:24:06.182 10:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:24:06.440 10:56:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:24:06.440 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.440 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:06.698 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:06.957 request: 00:24:06.957 { 00:24:06.957 "name": "nvme0", 00:24:06.957 "dhchap_key": "key1", 00:24:06.957 "dhchap_ctrlr_key": "key3", 00:24:06.957 "method": "bdev_nvme_set_keys", 00:24:06.957 "req_id": 1 00:24:06.957 } 00:24:06.957 Got JSON-RPC error response 00:24:06.957 response: 00:24:06.957 { 00:24:06.957 "code": -13, 00:24:06.957 "message": "Permission denied" 00:24:06.957 } 00:24:06.957 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:06.957 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.957 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.957 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.957 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:24:06.957 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:06.957 10:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:07.215 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:24:07.216 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:24:08.153 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:08.153 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:08.153 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:08.412 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:24:08.412 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:08.412 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.412 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.412 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.412 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:08.412 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:08.412 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:09.347 nvme0n1 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:09.347 
10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:09.347 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:09.606 request: 00:24:09.606 { 00:24:09.606 "name": "nvme0", 00:24:09.606 "dhchap_key": "key2", 00:24:09.606 "dhchap_ctrlr_key": "key0", 00:24:09.606 "method": "bdev_nvme_set_keys", 00:24:09.606 "req_id": 1 00:24:09.606 } 00:24:09.606 Got JSON-RPC error response 00:24:09.606 response: 00:24:09.606 { 00:24:09.606 "code": -13, 00:24:09.606 "message": "Permission denied" 00:24:09.606 } 00:24:09.606 10:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:09.606 10:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:09.606 10:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:09.606 10:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:09.606 10:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:09.606 10:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:09.606 10:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.865 10:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:24:09.865 10:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:24:10.800 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:10.800 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:10.800 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:24:11.059 10:56:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1720760 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1720760 ']' 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1720760 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1720760 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1720760' 00:24:11.059 killing process with pid 1720760 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1720760 00:24:11.059 10:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1720760 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:13.594 rmmod nvme_rdma 00:24:13.594 rmmod nvme_fabrics 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1744668 ']' 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1744668 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1744668 ']' 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1744668 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744668 00:24:13.594 10:56:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744668' 00:24:13.594 killing process with pid 1744668 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1744668 00:24:13.594 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1744668 00:24:14.529 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.529 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:14.529 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wMD /tmp/spdk.key-sha256.lgf /tmp/spdk.key-sha384.nX4 /tmp/spdk.key-sha512.5Si /tmp/spdk.key-sha512.ChZ /tmp/spdk.key-sha384.VVl /tmp/spdk.key-sha256.HkE '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:24:14.529 00:24:14.529 real 2m50.515s 00:24:14.529 user 6m22.853s 00:24:14.529 sys 0m26.673s 00:24:14.529 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:14.529 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.529 ************************************ 00:24:14.529 END TEST nvmf_auth_target 00:24:14.529 ************************************ 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:14.788 ************************************ 00:24:14.788 START TEST nvmf_fuzz 00:24:14.788 ************************************ 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:24:14.788 * Looking for test storage... 
00:24:14.788 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:14.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.788 --rc genhtml_branch_coverage=1 00:24:14.788 --rc genhtml_function_coverage=1 00:24:14.788 --rc genhtml_legend=1 00:24:14.788 --rc geninfo_all_blocks=1 00:24:14.788 --rc geninfo_unexecuted_blocks=1 00:24:14.788 00:24:14.788 ' 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:14.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.788 --rc genhtml_branch_coverage=1 00:24:14.788 --rc genhtml_function_coverage=1 00:24:14.788 --rc genhtml_legend=1 00:24:14.788 --rc geninfo_all_blocks=1 00:24:14.788 --rc geninfo_unexecuted_blocks=1 00:24:14.788 00:24:14.788 ' 00:24:14.788 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:14.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.788 --rc genhtml_branch_coverage=1 00:24:14.788 --rc genhtml_function_coverage=1 00:24:14.788 --rc genhtml_legend=1 00:24:14.788 --rc geninfo_all_blocks=1 00:24:14.789 --rc geninfo_unexecuted_blocks=1 00:24:14.789 00:24:14.789 ' 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:14.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.789 --rc genhtml_branch_coverage=1 00:24:14.789 --rc genhtml_function_coverage=1 00:24:14.789 --rc genhtml_legend=1 00:24:14.789 --rc geninfo_all_blocks=1 00:24:14.789 --rc geninfo_unexecuted_blocks=1 00:24:14.789 00:24:14.789 ' 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.789 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:15.047 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:15.047 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:23.154 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.154 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:23.154 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:23.154 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:23.154 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:23.154 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:23.154 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:23.154 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:23.154 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:23.154 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:23.155 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:23.155 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:23.155 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:23.155 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # rdma_device_init 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:23.155 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:23.155 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:23.155 altname enp217s0f0np0 00:24:23.155 altname ens818f0np0 00:24:23.155 inet 192.168.100.8/24 scope global mlx_0_0 
00:24:23.155 valid_lft forever preferred_lft forever 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:23.155 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:23.156 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:23.156 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:23.156 altname enp217s0f1np1 00:24:23.156 altname ens818f1np1 00:24:23.156 inet 192.168.100.9/24 scope global mlx_0_1 00:24:23.156 valid_lft forever preferred_lft forever 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:23.156 10:57:01 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:23.156 192.168.100.9' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:23.156 192.168.100.9' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # head -n 1 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # tail -n +2 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:23.156 192.168.100.9' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # head -n 1 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1752804 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 
00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1752804 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1752804 ']' 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.156 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:23.415 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.415 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:23.415 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:23.415 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.415 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:23.673 Malloc0 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 
192.168.100.8 -s 4420 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:24:23.673 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:24:55.737 Fuzzing completed. Shutting down the fuzz application 00:24:55.737 00:24:55.737 Dumping successful admin opcodes: 00:24:55.737 9, 10, 00:24:55.737 Dumping successful io opcodes: 00:24:55.737 0, 9, 00:24:55.737 NS: 0x2000008f0ec0 I/O qp, Total commands completed: 811153, total successful commands: 4713, random_seed: 32074880 00:24:55.737 NS: 0x2000008f0ec0 admin qp, Total commands completed: 119840, total successful commands: 28, random_seed: 155568832 00:24:55.737 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:56.311 Fuzzing completed. Shutting down the fuzz application 00:24:56.312 00:24:56.312 Dumping successful admin opcodes: 00:24:56.312 00:24:56.312 Dumping successful io opcodes: 00:24:56.312 00:24:56.312 NS: 0x2000008f0ec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 423670168 00:24:56.312 NS: 0x2000008f0ec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 423761520 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 
00:24:56.312 rmmod nvme_rdma 00:24:56.312 rmmod nvme_fabrics 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 1752804 ']' 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 1752804 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1752804 ']' 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 1752804 00:24:56.312 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:24:56.312 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.312 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1752804 00:24:56.312 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:56.312 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:56.312 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1752804' 00:24:56.312 killing process with pid 1752804 00:24:56.312 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 1752804 00:24:56.312 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 1752804 00:24:57.685 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:57.685 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:57.685 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:57.685 00:24:57.685 real 0m43.018s 00:24:57.685 user 0m55.228s 00:24:57.685 sys 0m20.269s 00:24:57.685 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.685 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:57.685 ************************************ 00:24:57.685 END TEST nvmf_fuzz 00:24:57.685 ************************************ 00:24:57.685 10:57:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:24:57.685 10:57:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.685 10:57:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.685 10:57:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:57.943 ************************************ 00:24:57.943 START TEST nvmf_multiconnection 00:24:57.943 ************************************ 00:24:57.943 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:24:57.943 * Looking for test storage... 00:24:57.943 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:57.943 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:57.943 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:57.943 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:24:57.943 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:57.943 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.943 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:57.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.944 --rc genhtml_branch_coverage=1 00:24:57.944 --rc genhtml_function_coverage=1 00:24:57.944 --rc genhtml_legend=1 00:24:57.944 --rc geninfo_all_blocks=1 00:24:57.944 --rc geninfo_unexecuted_blocks=1 00:24:57.944 00:24:57.944 ' 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:57.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.944 --rc genhtml_branch_coverage=1 00:24:57.944 --rc genhtml_function_coverage=1 00:24:57.944 --rc genhtml_legend=1 00:24:57.944 --rc geninfo_all_blocks=1 00:24:57.944 --rc geninfo_unexecuted_blocks=1 00:24:57.944 00:24:57.944 ' 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:57.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.944 --rc genhtml_branch_coverage=1 00:24:57.944 --rc genhtml_function_coverage=1 00:24:57.944 --rc genhtml_legend=1 00:24:57.944 --rc geninfo_all_blocks=1 00:24:57.944 --rc geninfo_unexecuted_blocks=1 00:24:57.944 00:24:57.944 ' 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:57.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.944 --rc genhtml_branch_coverage=1 00:24:57.944 --rc genhtml_function_coverage=1 00:24:57.944 --rc genhtml_legend=1 00:24:57.944 --rc geninfo_all_blocks=1 00:24:57.944 --rc geninfo_unexecuted_blocks=1 00:24:57.944 00:24:57.944 ' 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.944 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:57.944 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.945 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.910 
10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:07.910 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:07.910 10:57:45 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.910 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:07.910 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:07.911 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.911 10:57:45 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:07.911 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # rdma_device_init 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:07.911 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:07.911 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:07.911 altname enp217s0f0np0 00:25:07.911 altname ens818f0np0 00:25:07.911 inet 192.168.100.8/24 scope global mlx_0_0 00:25:07.911 valid_lft forever preferred_lft forever 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:07.911 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:07.911 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:07.911 altname enp217s0f1np1 00:25:07.911 altname ens818f1np1 00:25:07.911 inet 192.168.100.9/24 scope global mlx_0_1 00:25:07.911 valid_lft forever preferred_lft forever 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:07.911 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:07.912 192.168.100.9' 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:07.912 192.168.100.9' 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # head -n 1 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:07.912 192.168.100.9' 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # tail -n +2 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # head -n 1 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=1763248 00:25:07.912 10:57:45 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 1763248 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 1763248 ']' 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.912 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 [2024-11-28 10:57:45.362331] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:25:07.912 [2024-11-28 10:57:45.362439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.912 [2024-11-28 10:57:45.513471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:07.912 [2024-11-28 10:57:45.613988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.912 [2024-11-28 10:57:45.614037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.912 [2024-11-28 10:57:45.614051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.912 [2024-11-28 10:57:45.614065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.912 [2024-11-28 10:57:45.614076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
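The trace above shows the target being brought up for the multiconnection run: nvmf_tgt is launched with -i 0 (shm id), -e 0xFFFF (all trace groups) and -m 0xF (cores 0-3), its pid (1763248 in this run) is recorded, and the test blocks until the JSON-RPC socket is listening. A minimal sketch of that start-and-wait pattern, assuming the test framework's common scripts have been sourced so waitforlisten, process_shm and nvmftestfini exist; the backgrounding and $! pid capture are illustrative, not a literal copy of the framework code:

    # Sketch only: mirrors the nvmfappstart sequence traced above.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                      # pid of the background target (1763248 in this run)
    waitforlisten $nvmfpid          # returns once /var/tmp/spdk.sock accepts RPCs
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
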
00:25:07.912 [2024-11-28 10:57:45.616660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.912 [2024-11-28 10:57:45.616733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.912 [2024-11-28 10:57:45.616753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.912 [2024-11-28 10:57:45.616762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 [2024-11-28 10:57:46.265418] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f7085d8b940) succeed. 00:25:07.912 [2024-11-28 10:57:46.274788] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f7085d47940) succeed. 
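The xtrace that follows is the per-subsystem loop in multiconnection.sh expanding eleven times (NVMF_SUBSYS=11): each iteration creates a 64 MB malloc bdev with 512-byte blocks, a subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDKN, attaches the bdev as a namespace, and adds an RDMA listener on 192.168.100.8:4420. A condensed sketch of that loop, assuming the rpc_cmd helper and the variables already set earlier in the trace (MALLOC_BDEV_SIZE, MALLOC_BLOCK_SIZE, NVMF_SUBSYS, NVMF_FIRST_TARGET_IP, NVMF_PORT):

    # Sketch only: rpc_cmd is the test framework's wrapper around scripts/rpc.py.
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma \
            -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
    done
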
00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 Malloc1 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 [2024-11-28 10:57:46.643639] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 Malloc2 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.912 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:07.913 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.913 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.171 Malloc3 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.171 
10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.171 Malloc4 00:25:08.171 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.172 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.172 Malloc5 00:25:08.172 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.172 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:08.172 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.172 10:57:47 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.172 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.172 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:08.172 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.172 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.172 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.172 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:25:08.172 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.172 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 Malloc6 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 10:57:47 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 Malloc7 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.431 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 Malloc8 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 10:57:47 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 Malloc9 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 Malloc10 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.690 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.949 Malloc11 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.949 10:57:47 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.949 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:09.883 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:09.883 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:09.883 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.883 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:09.883 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:11.784 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:11.784 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:11.784 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:11.784 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:11.784 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:11.784 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:11.784 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.784 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:25:13.157 10:57:51 
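[editor note] At this point the target side is fully configured: the loop at target/multiconnection.sh@21-25 has created Malloc1..Malloc11, subsystems cnode1..cnode11, attached one namespace each, and added an RDMA listener on 192.168.100.8:4420. A minimal shell sketch of that loop is below; it uses rpc.py directly and a hard-coded subsystem count, which are assumptions for illustration (the test script itself goes through its rpc_cmd wrapper).

```bash
#!/usr/bin/env bash
# Sketch of the per-subsystem target setup repeated in the log above.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed rpc.py location
NVMF_SUBSYS=11                                                     # the log creates cnode1..cnode11

for i in $(seq 1 "$NVMF_SUBSYS"); do
        # 64 MiB malloc bdev with 512-byte blocks, as in "bdev_malloc_create 64 512 -b MallocN"
        "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"
        # Subsystem with serial SPDKN, allowing any host (-a)
        "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        # Attach the malloc bdev as the subsystem's namespace
        "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        # Expose it over RDMA on 192.168.100.8:4420
        "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
done
```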
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:13.157 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:13.157 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:13.157 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:13.157 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:15.056 10:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:15.056 10:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:15.056 10:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:15.056 10:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:15.056 10:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:15.056 10:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:15.056 10:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.056 10:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:25:15.990 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:15.990 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:15.990 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.990 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:15.990 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:17.891 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:17.891 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:17.891 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:17.891 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:17.891 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.891 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:17.891 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.891 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:25:18.827 10:57:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:18.827 10:57:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:18.827 10:57:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.827 10:57:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:18.827 10:57:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:21.357 10:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:21.357 10:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:21.357 10:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:21.357 10:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:21.357 10:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.357 10:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:21.357 10:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.357 10:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:25:21.922 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:21.922 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:21.922 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.922 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:21.922 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:23.819 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:23.819 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:23.819 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:23.819 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:23.819 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.819 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:23.819 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.819 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:25:25.193 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:25.193 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:25.193 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.193 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:25.193 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:27.090 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:27.090 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:27.090 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:27.090 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:27.090 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:27.090 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:27.090 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.090 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:25:28.052 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:28.052 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:28.052 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:28.052 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:28.052 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:29.952 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:29.952 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:29.952 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:29.952 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:29.952 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:25:29.952 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:29.952 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.952 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:25:30.888 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:30.888 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:30.888 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:30.888 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:30.888 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:33.420 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:33.420 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:33.420 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:33.420 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:33.420 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.420 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:33.420 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.420 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:25:33.988 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:33.988 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:33.988 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:33.988 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:33.988 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:35.895 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:35.895 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:35.895 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:35.895 10:58:14 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:35.895 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:35.895 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:35.895 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.895 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:25:37.275 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:37.275 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:37.275 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:37.275 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:37.275 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:39.213 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:39.213 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:39.213 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:39.213 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:39.213 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.213 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:39.213 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.213 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:25:39.876 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:39.876 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:39.876 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.876 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:39.876 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:42.419 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:42.419 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:42.419 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:42.419 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:42.419 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.419 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:42.419 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:42.419 [global] 00:25:42.419 thread=1 00:25:42.419 invalidate=1 00:25:42.419 rw=read 00:25:42.419 time_based=1 00:25:42.419 runtime=10 00:25:42.419 ioengine=libaio 00:25:42.419 direct=1 00:25:42.419 bs=262144 00:25:42.419 iodepth=64 00:25:42.419 norandommap=1 00:25:42.419 numjobs=1 00:25:42.419 00:25:42.419 [job0] 00:25:42.419 filename=/dev/nvme0n1 00:25:42.419 [job1] 00:25:42.419 filename=/dev/nvme10n1 00:25:42.419 [job2] 00:25:42.419 filename=/dev/nvme1n1 00:25:42.419 [job3] 00:25:42.419 filename=/dev/nvme2n1 00:25:42.419 [job4] 00:25:42.419 filename=/dev/nvme3n1 00:25:42.419 [job5] 00:25:42.419 filename=/dev/nvme4n1 00:25:42.419 [job6] 00:25:42.419 filename=/dev/nvme5n1 00:25:42.419 [job7] 00:25:42.419 filename=/dev/nvme6n1 00:25:42.419 [job8] 00:25:42.419 filename=/dev/nvme7n1 00:25:42.419 [job9] 00:25:42.419 filename=/dev/nvme8n1 00:25:42.419 [job10] 00:25:42.419 filename=/dev/nvme9n1 00:25:42.419 Could not set queue depth (nvme0n1) 00:25:42.419 Could not set queue depth (nvme10n1) 00:25:42.419 Could not set queue depth (nvme1n1) 00:25:42.419 Could not set queue depth (nvme2n1) 00:25:42.419 Could not set queue depth (nvme3n1) 00:25:42.419 Could not set queue depth (nvme4n1) 00:25:42.419 Could not set queue depth (nvme5n1) 00:25:42.419 Could not set queue depth (nvme6n1) 00:25:42.419 Could not set queue depth (nvme7n1) 00:25:42.419 Could not set queue depth (nvme8n1) 00:25:42.419 Could not set queue depth (nvme9n1) 00:25:42.678 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.678 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.678 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.678 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.678 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.678 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.678 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.678 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.678 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.678 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.678 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
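[editor note] Before the I/O phase, the host side has connected to all eleven subsystems using the pattern repeated above (target/multiconnection.sh@28-30 plus the serial-wait helper from common/autotest_common.sh@1202-1212). The sketch below condenses that pattern; the host NQN/ID are the values printed in this log, and the helper is a simplified stand-in for the real one, not a verbatim copy.

```bash
#!/usr/bin/env bash
# Condensed host-side connect loop as seen in the log above.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
HOSTID=8013ee90-59d8-e711-906e-00163566263e

waitforserial() {
        # Simplified version of the helper visible in the log: sleep, then poll lsblk
        # until a block device with the expected SERIAL shows up (up to ~15 tries).
        local serial=$1 i=0
        sleep 2
        while (( i++ <= 15 )); do
                (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && return 0
                sleep 2
        done
        return 1
}

for i in $(seq 1 11); do
        nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
                -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
        waitforserial "SPDK$i"
done
```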
256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.678 fio-3.35 00:25:42.678 Starting 11 threads 00:25:54.892 00:25:54.892 job0: (groupid=0, jobs=1): err= 0: pid=1769557: Thu Nov 28 10:58:31 2024 00:25:54.892 read: IOPS=1533, BW=383MiB/s (402MB/s)(3853MiB/10050msec) 00:25:54.892 slat (usec): min=12, max=22740, avg=644.81, stdev=1742.75 00:25:54.892 clat (msec): min=10, max=118, avg=41.05, stdev= 9.10 00:25:54.892 lat (msec): min=10, max=118, avg=41.70, stdev= 9.33 00:25:54.892 clat percentiles (msec): 00:25:54.892 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:25:54.892 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 38], 00:25:54.892 | 70.00th=[ 41], 80.00th=[ 53], 90.00th=[ 55], 95.00th=[ 56], 00:25:54.892 | 99.00th=[ 66], 99.50th=[ 70], 99.90th=[ 105], 99.95th=[ 115], 00:25:54.892 | 99.99th=[ 118] 00:25:54.892 bw ( KiB/s): min=286208, max=457728, per=10.78%, avg=392883.20, stdev=64222.69, samples=20 00:25:54.892 iops : min= 1118, max= 1788, avg=1534.70, stdev=250.87, samples=20 00:25:54.893 lat (msec) : 20=0.38%, 50=72.70%, 100=26.78%, 250=0.14% 00:25:54.893 cpu : usr=0.65%, sys=6.55%, ctx=2775, majf=0, minf=4097 00:25:54.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:54.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.893 issued rwts: total=15410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.893 job1: (groupid=0, jobs=1): err= 0: pid=1769574: Thu Nov 28 10:58:31 2024 00:25:54.893 read: IOPS=2016, BW=504MiB/s (529MB/s)(5058MiB/10034msec) 00:25:54.893 slat (usec): min=12, max=16184, avg=490.77, stdev=1219.93 00:25:54.893 clat (usec): min=11308, max=64507, avg=31219.25, stdev=10249.95 00:25:54.893 lat (usec): min=11569, max=71432, avg=31710.02, stdev=10437.86 00:25:54.893 clat percentiles (usec): 00:25:54.893 | 1.00th=[14746], 5.00th=[16057], 10.00th=[16581], 20.00th=[17171], 00:25:54.893 | 30.00th=[30540], 40.00th=[34341], 50.00th=[34866], 60.00th=[35914], 00:25:54.893 | 70.00th=[35914], 80.00th=[36963], 90.00th=[39060], 95.00th=[49021], 00:25:54.893 | 99.00th=[55313], 99.50th=[57410], 99.90th=[58983], 99.95th=[59507], 00:25:54.893 | 99.99th=[62129] 00:25:54.893 bw ( KiB/s): min=298496, max=974336, per=14.16%, avg=516300.80, stdev=187564.16, samples=20 00:25:54.893 iops : min= 1166, max= 3806, avg=2016.80, stdev=732.67, samples=20 00:25:54.893 lat (msec) : 20=28.98%, 50=66.22%, 100=4.81% 00:25:54.893 cpu : usr=0.74%, sys=7.76%, ctx=3567, majf=0, minf=3660 00:25:54.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:25:54.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.893 issued rwts: total=20231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.893 job2: (groupid=0, jobs=1): err= 0: pid=1769592: Thu Nov 28 10:58:31 2024 00:25:54.893 read: IOPS=938, BW=235MiB/s (246MB/s)(2357MiB/10050msec) 00:25:54.893 slat (usec): min=17, max=20858, avg=1056.03, stdev=2606.25 00:25:54.893 clat (msec): min=13, max=103, avg=67.10, stdev=15.84 00:25:54.893 lat (msec): min=13, max=110, avg=68.15, stdev=16.22 00:25:54.893 clat percentiles (msec): 00:25:54.893 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 54], 00:25:54.893 | 30.00th=[ 
61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:25:54.893 | 70.00th=[ 74], 80.00th=[ 77], 90.00th=[ 89], 95.00th=[ 91], 00:25:54.893 | 99.00th=[ 95], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 103], 00:25:54.893 | 99.99th=[ 104] 00:25:54.893 bw ( KiB/s): min=171008, max=440832, per=6.58%, avg=239744.00, stdev=62379.10, samples=20 00:25:54.893 iops : min= 668, max= 1722, avg=936.50, stdev=243.67, samples=20 00:25:54.893 lat (msec) : 20=0.16%, 50=14.04%, 100=85.56%, 250=0.23% 00:25:54.893 cpu : usr=0.58%, sys=4.57%, ctx=1793, majf=0, minf=4097 00:25:54.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:54.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.893 issued rwts: total=9428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.893 job3: (groupid=0, jobs=1): err= 0: pid=1769601: Thu Nov 28 10:58:31 2024 00:25:54.893 read: IOPS=1283, BW=321MiB/s (336MB/s)(3224MiB/10045msec) 00:25:54.893 slat (usec): min=11, max=16832, avg=750.01, stdev=1868.53 00:25:54.893 clat (usec): min=12498, max=83965, avg=49058.87, stdev=11772.83 00:25:54.893 lat (usec): min=12756, max=85063, avg=49808.88, stdev=12052.57 00:25:54.893 clat percentiles (usec): 00:25:54.893 | 1.00th=[25560], 5.00th=[32113], 10.00th=[32900], 20.00th=[34341], 00:25:54.893 | 30.00th=[47449], 40.00th=[49021], 50.00th=[50070], 60.00th=[51119], 00:25:54.893 | 70.00th=[52691], 80.00th=[56886], 90.00th=[66847], 95.00th=[67634], 00:25:54.893 | 99.00th=[71828], 99.50th=[74974], 99.90th=[80217], 99.95th=[83362], 00:25:54.893 | 99.99th=[84411] 00:25:54.893 bw ( KiB/s): min=233984, max=481280, per=9.01%, avg=328448.00, stdev=71053.14, samples=20 00:25:54.893 iops : min= 914, max= 1880, avg=1283.00, stdev=277.55, samples=20 00:25:54.893 lat (msec) : 20=0.46%, 50=48.88%, 100=50.66% 00:25:54.893 cpu : usr=0.62%, sys=5.31%, ctx=2673, majf=0, minf=4097 00:25:54.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:25:54.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.893 issued rwts: total=12894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.893 job4: (groupid=0, jobs=1): err= 0: pid=1769606: Thu Nov 28 10:58:31 2024 00:25:54.893 read: IOPS=884, BW=221MiB/s (232MB/s)(2220MiB/10045msec) 00:25:54.893 slat (usec): min=17, max=27000, avg=1112.20, stdev=2672.49 00:25:54.893 clat (msec): min=13, max=115, avg=71.21, stdev=11.00 00:25:54.893 lat (msec): min=13, max=115, avg=72.32, stdev=11.38 00:25:54.893 clat percentiles (msec): 00:25:54.893 | 1.00th=[ 49], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 67], 00:25:54.893 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:25:54.893 | 70.00th=[ 74], 80.00th=[ 78], 90.00th=[ 89], 95.00th=[ 91], 00:25:54.893 | 99.00th=[ 95], 99.50th=[ 99], 99.90th=[ 103], 99.95th=[ 105], 00:25:54.893 | 99.99th=[ 116] 00:25:54.893 bw ( KiB/s): min=173568, max=313856, per=6.19%, avg=225715.20, stdev=31807.51, samples=20 00:25:54.893 iops : min= 678, max= 1226, avg=881.70, stdev=124.25, samples=20 00:25:54.893 lat (msec) : 20=0.16%, 50=5.35%, 100=94.26%, 250=0.24% 00:25:54.893 cpu : usr=0.55%, sys=4.36%, ctx=1721, majf=0, minf=4097 00:25:54.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.4%, >=64=99.3% 00:25:54.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.893 issued rwts: total=8880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.893 job5: (groupid=0, jobs=1): err= 0: pid=1769626: Thu Nov 28 10:58:31 2024 00:25:54.893 read: IOPS=1499, BW=375MiB/s (393MB/s)(3766MiB/10049msec) 00:25:54.893 slat (usec): min=10, max=43171, avg=645.84, stdev=1937.09 00:25:54.893 clat (msec): min=12, max=131, avg=42.00, stdev=13.32 00:25:54.893 lat (msec): min=13, max=132, avg=42.64, stdev=13.61 00:25:54.893 clat percentiles (msec): 00:25:54.893 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:25:54.893 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 37], 00:25:54.893 | 70.00th=[ 39], 80.00th=[ 52], 90.00th=[ 58], 95.00th=[ 73], 00:25:54.893 | 99.00th=[ 91], 99.50th=[ 91], 99.90th=[ 105], 99.95th=[ 110], 00:25:54.893 | 99.99th=[ 132] 00:25:54.893 bw ( KiB/s): min=192000, max=457728, per=10.53%, avg=384051.20, stdev=88021.67, samples=20 00:25:54.893 iops : min= 750, max= 1788, avg=1500.20, stdev=343.83, samples=20 00:25:54.893 lat (msec) : 20=0.25%, 50=78.54%, 100=21.09%, 250=0.12% 00:25:54.893 cpu : usr=0.37%, sys=5.26%, ctx=2989, majf=0, minf=4097 00:25:54.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:54.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.893 issued rwts: total=15065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.893 job6: (groupid=0, jobs=1): err= 0: pid=1769636: Thu Nov 28 10:58:31 2024 00:25:54.893 read: IOPS=953, BW=238MiB/s (250MB/s)(2393MiB/10034msec) 00:25:54.893 slat (usec): min=12, max=25812, avg=1020.18, stdev=2598.21 00:25:54.893 clat (msec): min=13, max=111, avg=66.01, stdev=16.82 00:25:54.894 lat (msec): min=13, max=112, avg=67.03, stdev=17.24 00:25:54.894 clat percentiles (msec): 00:25:54.894 | 1.00th=[ 33], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 53], 00:25:54.894 | 30.00th=[ 58], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:25:54.894 | 70.00th=[ 74], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 91], 00:25:54.894 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 104], 99.95th=[ 109], 00:25:54.894 | 99.99th=[ 112] 00:25:54.894 bw ( KiB/s): min=174080, max=441344, per=6.68%, avg=243379.20, stdev=69871.40, samples=20 00:25:54.894 iops : min= 680, max= 1724, avg=950.70, stdev=272.94, samples=20 00:25:54.894 lat (msec) : 20=0.36%, 50=16.09%, 100=83.39%, 250=0.17% 00:25:54.894 cpu : usr=0.36%, sys=4.65%, ctx=1959, majf=0, minf=4097 00:25:54.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:54.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.894 issued rwts: total=9570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.894 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.894 job7: (groupid=0, jobs=1): err= 0: pid=1769643: Thu Nov 28 10:58:31 2024 00:25:54.894 read: IOPS=1107, BW=277MiB/s (290MB/s)(2782MiB/10044msec) 00:25:54.894 slat (usec): min=11, max=18048, avg=854.77, stdev=2170.17 00:25:54.894 clat (msec): min=14, max=107, avg=56.86, stdev=11.86 00:25:54.894 
lat (msec): min=14, max=108, avg=57.71, stdev=12.19 00:25:54.894 clat percentiles (msec): 00:25:54.894 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:25:54.894 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 54], 00:25:54.894 | 70.00th=[ 58], 80.00th=[ 67], 90.00th=[ 69], 95.00th=[ 90], 00:25:54.894 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 102], 00:25:54.894 | 99.99th=[ 108] 00:25:54.894 bw ( KiB/s): min=173915, max=329216, per=7.77%, avg=283255.75, stdev=47184.08, samples=20 00:25:54.894 iops : min= 679, max= 1286, avg=1106.45, stdev=184.36, samples=20 00:25:54.894 lat (msec) : 20=0.18%, 50=27.58%, 100=71.99%, 250=0.25% 00:25:54.894 cpu : usr=0.38%, sys=4.51%, ctx=2469, majf=0, minf=4097 00:25:54.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:54.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.894 issued rwts: total=11127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.894 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.894 job8: (groupid=0, jobs=1): err= 0: pid=1769660: Thu Nov 28 10:58:31 2024 00:25:54.894 read: IOPS=898, BW=225MiB/s (235MB/s)(2256MiB/10049msec) 00:25:54.894 slat (usec): min=11, max=24841, avg=1060.14, stdev=2810.80 00:25:54.894 clat (msec): min=14, max=112, avg=70.12, stdev=10.96 00:25:54.894 lat (msec): min=15, max=112, avg=71.18, stdev=11.42 00:25:54.894 clat percentiles (msec): 00:25:54.894 | 1.00th=[ 42], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 66], 00:25:54.894 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 71], 60.00th=[ 72], 00:25:54.894 | 70.00th=[ 73], 80.00th=[ 75], 90.00th=[ 87], 95.00th=[ 91], 00:25:54.894 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 105], 99.95th=[ 108], 00:25:54.894 | 99.99th=[ 113] 00:25:54.894 bw ( KiB/s): min=169984, max=289792, per=6.29%, avg=229427.20, stdev=30332.16, samples=20 00:25:54.894 iops : min= 664, max= 1132, avg=896.20, stdev=118.48, samples=20 00:25:54.894 lat (msec) : 20=0.13%, 50=3.28%, 100=96.17%, 250=0.42% 00:25:54.894 cpu : usr=0.40%, sys=3.50%, ctx=1989, majf=0, minf=4097 00:25:54.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:54.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.894 issued rwts: total=9025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.894 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.894 job9: (groupid=0, jobs=1): err= 0: pid=1769672: Thu Nov 28 10:58:31 2024 00:25:54.894 read: IOPS=1602, BW=401MiB/s (420MB/s)(4028MiB/10050msec) 00:25:54.894 slat (usec): min=11, max=15618, avg=613.27, stdev=1602.17 00:25:54.894 clat (msec): min=4, max=105, avg=39.27, stdev=16.99 00:25:54.894 lat (msec): min=4, max=106, avg=39.89, stdev=17.29 00:25:54.894 clat percentiles (msec): 00:25:54.894 | 1.00th=[ 16], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 18], 00:25:54.894 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 37], 60.00th=[ 50], 00:25:54.894 | 70.00th=[ 51], 80.00th=[ 52], 90.00th=[ 57], 95.00th=[ 68], 00:25:54.894 | 99.00th=[ 87], 99.50th=[ 90], 99.90th=[ 94], 99.95th=[ 102], 00:25:54.894 | 99.99th=[ 106] 00:25:54.894 bw ( KiB/s): min=207360, max=972288, per=11.27%, avg=410893.35, stdev=198043.91, samples=20 00:25:54.894 iops : min= 810, max= 3798, avg=1605.05, stdev=773.60, samples=20 00:25:54.894 lat (msec) : 10=0.29%, 20=24.21%, 
50=44.10%, 100=31.33%, 250=0.06% 00:25:54.894 cpu : usr=0.57%, sys=6.34%, ctx=2938, majf=0, minf=4097 00:25:54.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:54.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.894 issued rwts: total=16110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.894 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.894 job10: (groupid=0, jobs=1): err= 0: pid=1769680: Thu Nov 28 10:58:31 2024 00:25:54.894 read: IOPS=1530, BW=383MiB/s (401MB/s)(3845MiB/10047msec) 00:25:54.894 slat (usec): min=11, max=17676, avg=643.01, stdev=1602.04 00:25:54.894 clat (msec): min=10, max=122, avg=41.12, stdev= 9.12 00:25:54.894 lat (msec): min=11, max=122, avg=41.77, stdev= 9.32 00:25:54.894 clat percentiles (msec): 00:25:54.894 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:25:54.894 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 38], 00:25:54.894 | 70.00th=[ 42], 80.00th=[ 53], 90.00th=[ 55], 95.00th=[ 56], 00:25:54.894 | 99.00th=[ 65], 99.50th=[ 71], 99.90th=[ 107], 99.95th=[ 110], 00:25:54.894 | 99.99th=[ 115] 00:25:54.894 bw ( KiB/s): min=283648, max=457216, per=10.76%, avg=392099.75, stdev=63665.47, samples=20 00:25:54.894 iops : min= 1108, max= 1786, avg=1531.60, stdev=248.72, samples=20 00:25:54.894 lat (msec) : 20=0.30%, 50=73.27%, 100=26.27%, 250=0.16% 00:25:54.894 cpu : usr=0.74%, sys=6.22%, ctx=2775, majf=0, minf=4097 00:25:54.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:54.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.894 issued rwts: total=15378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.894 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.894 00:25:54.894 Run status group 0 (all jobs): 00:25:54.894 READ: bw=3560MiB/s (3733MB/s), 221MiB/s-504MiB/s (232MB/s-529MB/s), io=34.9GiB (37.5GB), run=10034-10050msec 00:25:54.894 00:25:54.894 Disk stats (read/write): 00:25:54.894 nvme0n1: ios=30485/0, merge=0/0, ticks=1222009/0, in_queue=1222009, util=96.80% 00:25:54.894 nvme10n1: ios=39796/0, merge=0/0, ticks=1222310/0, in_queue=1222310, util=97.05% 00:25:54.894 nvme1n1: ios=18483/0, merge=0/0, ticks=1222731/0, in_queue=1222731, util=97.37% 00:25:54.894 nvme2n1: ios=25375/0, merge=0/0, ticks=1222093/0, in_queue=1222093, util=97.55% 00:25:54.894 nvme3n1: ios=17374/0, merge=0/0, ticks=1223944/0, in_queue=1223944, util=97.65% 00:25:54.894 nvme4n1: ios=29777/0, merge=0/0, ticks=1220283/0, in_queue=1220283, util=98.07% 00:25:54.894 nvme5n1: ios=18661/0, merge=0/0, ticks=1224998/0, in_queue=1224998, util=98.26% 00:25:54.894 nvme6n1: ios=21857/0, merge=0/0, ticks=1222328/0, in_queue=1222328, util=98.39% 00:25:54.894 nvme7n1: ios=17682/0, merge=0/0, ticks=1223301/0, in_queue=1223301, util=98.87% 00:25:54.894 nvme8n1: ios=31844/0, merge=0/0, ticks=1222001/0, in_queue=1222001, util=99.11% 00:25:54.894 nvme9n1: ios=30427/0, merge=0/0, ticks=1218663/0, in_queue=1218663, util=99.27% 00:25:54.895 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:54.895 [global] 00:25:54.895 thread=1 00:25:54.895 invalidate=1 00:25:54.895 rw=randwrite 00:25:54.895 time_based=1 
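[editor note] The second fio pass launched here reuses the same fio-wrapper with -t randwrite instead of -t read; the [global] and [jobN] sections it prints match the read pass except for rw=. A rough reconstruction of the job file that corresponds to "-p nvmf -i 262144 -d 64 -t randwrite -r 10" is sketched below; the output path is an assumption, and the device names are simply the ones listed in the job sections above.

```bash
#!/usr/bin/env bash
# Rough reconstruction of the fio job file for this randwrite pass (path assumed).
JOBFILE=/tmp/nvmf-randwrite.fio

cat > "$JOBFILE" <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
EOF

# One [jobN] stanza per connected namespace, in the same order as the log's job list.
n=1
for dev in /dev/nvme10n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 \
           /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1; do
        printf '\n[job%d]\nfilename=%s\n' "$n" "$dev" >> "$JOBFILE"
        n=$((n + 1))
done

fio "$JOBFILE"
```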
00:25:54.895 runtime=10 00:25:54.895 ioengine=libaio 00:25:54.895 direct=1 00:25:54.895 bs=262144 00:25:54.895 iodepth=64 00:25:54.895 norandommap=1 00:25:54.895 numjobs=1 00:25:54.895 00:25:54.895 [job0] 00:25:54.895 filename=/dev/nvme0n1 00:25:54.895 [job1] 00:25:54.895 filename=/dev/nvme10n1 00:25:54.895 [job2] 00:25:54.895 filename=/dev/nvme1n1 00:25:54.895 [job3] 00:25:54.895 filename=/dev/nvme2n1 00:25:54.895 [job4] 00:25:54.895 filename=/dev/nvme3n1 00:25:54.895 [job5] 00:25:54.895 filename=/dev/nvme4n1 00:25:54.895 [job6] 00:25:54.895 filename=/dev/nvme5n1 00:25:54.895 [job7] 00:25:54.895 filename=/dev/nvme6n1 00:25:54.895 [job8] 00:25:54.895 filename=/dev/nvme7n1 00:25:54.895 [job9] 00:25:54.895 filename=/dev/nvme8n1 00:25:54.895 [job10] 00:25:54.895 filename=/dev/nvme9n1 00:25:54.895 Could not set queue depth (nvme0n1) 00:25:54.895 Could not set queue depth (nvme10n1) 00:25:54.895 Could not set queue depth (nvme1n1) 00:25:54.895 Could not set queue depth (nvme2n1) 00:25:54.895 Could not set queue depth (nvme3n1) 00:25:54.895 Could not set queue depth (nvme4n1) 00:25:54.895 Could not set queue depth (nvme5n1) 00:25:54.895 Could not set queue depth (nvme6n1) 00:25:54.895 Could not set queue depth (nvme7n1) 00:25:54.895 Could not set queue depth (nvme8n1) 00:25:54.895 Could not set queue depth (nvme9n1) 00:25:54.895 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.895 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.895 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.895 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.895 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.895 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.895 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.895 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.895 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.895 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.895 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.895 fio-3.35 00:25:54.895 Starting 11 threads 00:26:04.875 00:26:04.875 job0: (groupid=0, jobs=1): err= 0: pid=1771258: Thu Nov 28 10:58:42 2024 00:26:04.875 write: IOPS=852, BW=213MiB/s (223MB/s)(2146MiB/10072msec); 0 zone resets 00:26:04.875 slat (usec): min=23, max=21475, avg=1102.57, stdev=2222.10 00:26:04.875 clat (msec): min=10, max=170, avg=73.96, stdev=17.34 00:26:04.875 lat (msec): min=10, max=170, avg=75.06, stdev=17.66 00:26:04.875 clat percentiles (msec): 00:26:04.875 | 1.00th=[ 40], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 61], 00:26:04.875 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 75], 60.00th=[ 78], 00:26:04.875 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 99], 95.00th=[ 114], 00:26:04.875 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 150], 99.95th=[ 163], 00:26:04.875 | 99.99th=[ 171] 
00:26:04.875 bw ( KiB/s): min=136704, max=265728, per=7.01%, avg=218137.60, stdev=37850.28, samples=20 00:26:04.875 iops : min= 534, max= 1038, avg=852.10, stdev=147.85, samples=20 00:26:04.875 lat (msec) : 20=0.34%, 50=3.25%, 100=88.09%, 250=8.32% 00:26:04.875 cpu : usr=1.93%, sys=3.63%, ctx=2227, majf=0, minf=1 00:26:04.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:04.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:04.875 issued rwts: total=0,8584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.875 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:04.875 job1: (groupid=0, jobs=1): err= 0: pid=1771282: Thu Nov 28 10:58:42 2024 00:26:04.875 write: IOPS=852, BW=213MiB/s (223MB/s)(2147MiB/10072msec); 0 zone resets 00:26:04.875 slat (usec): min=24, max=17592, avg=1159.14, stdev=2235.78 00:26:04.875 clat (msec): min=10, max=168, avg=73.89, stdev=16.84 00:26:04.875 lat (msec): min=10, max=168, avg=75.04, stdev=17.10 00:26:04.875 clat percentiles (msec): 00:26:04.875 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 00:26:04.875 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 73], 60.00th=[ 77], 00:26:04.875 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 99], 95.00th=[ 113], 00:26:04.875 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 157], 99.95th=[ 165], 00:26:04.875 | 99.99th=[ 169] 00:26:04.875 bw ( KiB/s): min=137728, max=265728, per=7.01%, avg=218188.80, stdev=38370.45, samples=20 00:26:04.875 iops : min= 538, max= 1038, avg=852.30, stdev=149.88, samples=20 00:26:04.875 lat (msec) : 20=0.17%, 50=0.43%, 100=91.31%, 250=8.08% 00:26:04.875 cpu : usr=1.97%, sys=3.98%, ctx=2114, majf=0, minf=1 00:26:04.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:04.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:04.875 issued rwts: total=0,8586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.875 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:04.875 job2: (groupid=0, jobs=1): err= 0: pid=1771299: Thu Nov 28 10:58:42 2024 00:26:04.875 write: IOPS=1109, BW=277MiB/s (291MB/s)(2782MiB/10032msec); 0 zone resets 00:26:04.875 slat (usec): min=22, max=29908, avg=870.56, stdev=1887.88 00:26:04.875 clat (msec): min=3, max=148, avg=56.82, stdev=20.76 00:26:04.875 lat (msec): min=4, max=148, avg=57.69, stdev=21.08 00:26:04.875 clat percentiles (msec): 00:26:04.875 | 1.00th=[ 18], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 41], 00:26:04.875 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 46], 60.00th=[ 61], 00:26:04.875 | 70.00th=[ 69], 80.00th=[ 78], 90.00th=[ 81], 95.00th=[ 85], 00:26:04.875 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 130], 99.95th=[ 144], 00:26:04.875 | 99.99th=[ 146] 00:26:04.875 bw ( KiB/s): min=134144, max=408576, per=9.10%, avg=283212.80, stdev=84164.99, samples=20 00:26:04.875 iops : min= 524, max= 1596, avg=1106.30, stdev=328.77, samples=20 00:26:04.875 lat (msec) : 4=0.01%, 10=0.32%, 20=0.95%, 50=49.69%, 100=45.41% 00:26:04.875 lat (msec) : 250=3.61% 00:26:04.875 cpu : usr=2.42%, sys=3.90%, ctx=2775, majf=0, minf=1 00:26:04.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:26:04.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:26:04.875 issued rwts: total=0,11126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.875 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:04.875 job3: (groupid=0, jobs=1): err= 0: pid=1771310: Thu Nov 28 10:58:42 2024 00:26:04.875 write: IOPS=854, BW=214MiB/s (224MB/s)(2153MiB/10072msec); 0 zone resets 00:26:04.875 slat (usec): min=29, max=13912, avg=1157.03, stdev=2224.39 00:26:04.875 clat (msec): min=5, max=165, avg=73.68, stdev=16.56 00:26:04.875 lat (msec): min=5, max=168, avg=74.83, stdev=16.84 00:26:04.875 clat percentiles (msec): 00:26:04.875 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 00:26:04.875 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 73], 60.00th=[ 77], 00:26:04.875 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 97], 95.00th=[ 113], 00:26:04.875 | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 157], 99.95th=[ 157], 00:26:04.875 | 99.99th=[ 165] 00:26:04.875 bw ( KiB/s): min=137216, max=268288, per=7.03%, avg=218828.80, stdev=38201.12, samples=20 00:26:04.875 iops : min= 536, max= 1048, avg=854.80, stdev=149.22, samples=20 00:26:04.875 lat (msec) : 10=0.05%, 20=0.14%, 50=0.42%, 100=91.94%, 250=7.46% 00:26:04.875 cpu : usr=2.22%, sys=3.37%, ctx=2019, majf=0, minf=1 00:26:04.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:04.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:04.875 issued rwts: total=0,8611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.875 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:04.875 job4: (groupid=0, jobs=1): err= 0: pid=1771315: Thu Nov 28 10:58:42 2024 00:26:04.875 write: IOPS=1360, BW=340MiB/s (357MB/s)(3406MiB/10017msec); 0 zone resets 00:26:04.875 slat (usec): min=16, max=14711, avg=692.62, stdev=1595.20 00:26:04.875 clat (usec): min=339, max=92682, avg=46344.49, stdev=25634.27 00:26:04.875 lat (usec): min=417, max=94933, avg=47037.12, stdev=26028.59 00:26:04.875 clat percentiles (usec): 00:26:04.875 | 1.00th=[ 8586], 5.00th=[17695], 10.00th=[18482], 20.00th=[19268], 00:26:04.875 | 30.00th=[19792], 40.00th=[21103], 50.00th=[52691], 60.00th=[59507], 00:26:04.875 | 70.00th=[63701], 80.00th=[76022], 90.00th=[79168], 95.00th=[81265], 00:26:04.875 | 99.00th=[85459], 99.50th=[87557], 99.90th=[89654], 99.95th=[90702], 00:26:04.875 | 99.99th=[91751] 00:26:04.875 bw ( KiB/s): min=199680, max=840704, per=11.16%, avg=347187.20, stdev=220345.23, samples=20 00:26:04.875 iops : min= 780, max= 3284, avg=1356.20, stdev=860.72, samples=20 00:26:04.875 lat (usec) : 500=0.01%, 1000=0.02% 00:26:04.875 lat (msec) : 2=0.14%, 4=0.21%, 10=0.85%, 20=30.64%, 50=17.78% 00:26:04.875 lat (msec) : 100=50.35% 00:26:04.876 cpu : usr=2.75%, sys=4.43%, ctx=3297, majf=0, minf=1 00:26:04.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:26:04.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:04.876 issued rwts: total=0,13625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.876 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:04.876 job5: (groupid=0, jobs=1): err= 0: pid=1771335: Thu Nov 28 10:58:42 2024 00:26:04.876 write: IOPS=852, BW=213MiB/s (223MB/s)(2146MiB/10068msec); 0 zone resets 00:26:04.876 slat (usec): min=27, max=17074, avg=1159.73, stdev=2230.90 00:26:04.876 clat (msec): min=17, max=170, avg=73.89, stdev=16.56 00:26:04.876 lat 
(msec): min=17, max=170, avg=75.05, stdev=16.82 00:26:04.876 clat percentiles (msec): 00:26:04.876 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 00:26:04.876 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 73], 60.00th=[ 77], 00:26:04.876 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 99], 95.00th=[ 114], 00:26:04.876 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 159], 99.95th=[ 163], 00:26:04.876 | 99.99th=[ 171] 00:26:04.876 bw ( KiB/s): min=138240, max=267776, per=7.01%, avg=218112.20, stdev=37783.57, samples=20 00:26:04.876 iops : min= 540, max= 1046, avg=852.00, stdev=147.59, samples=20 00:26:04.876 lat (msec) : 20=0.06%, 50=0.31%, 100=91.77%, 250=7.85% 00:26:04.876 cpu : usr=2.21%, sys=3.67%, ctx=2108, majf=0, minf=1 00:26:04.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:04.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:04.876 issued rwts: total=0,8582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.876 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:04.876 job6: (groupid=0, jobs=1): err= 0: pid=1771345: Thu Nov 28 10:58:42 2024 00:26:04.876 write: IOPS=1317, BW=329MiB/s (345MB/s)(3305MiB/10030msec); 0 zone resets 00:26:04.876 slat (usec): min=18, max=14138, avg=751.93, stdev=1486.24 00:26:04.876 clat (usec): min=14923, max=93262, avg=47793.44, stdev=19030.56 00:26:04.876 lat (usec): min=14975, max=93437, avg=48545.37, stdev=19317.87 00:26:04.876 clat percentiles (usec): 00:26:04.876 | 1.00th=[18220], 5.00th=[19006], 10.00th=[20055], 20.00th=[38011], 00:26:04.876 | 30.00th=[40109], 40.00th=[41157], 50.00th=[41681], 60.00th=[42730], 00:26:04.876 | 70.00th=[57934], 80.00th=[64750], 90.00th=[78119], 95.00th=[81265], 00:26:04.876 | 99.00th=[85459], 99.50th=[86508], 99.90th=[88605], 99.95th=[89654], 00:26:04.876 | 99.99th=[90702] 00:26:04.876 bw ( KiB/s): min=198144, max=789504, per=10.82%, avg=336793.60, stdev=140545.59, samples=20 00:26:04.876 iops : min= 774, max= 3084, avg=1315.60, stdev=549.01, samples=20 00:26:04.876 lat (msec) : 20=9.99%, 50=54.65%, 100=35.37% 00:26:04.876 cpu : usr=2.83%, sys=5.04%, ctx=3149, majf=0, minf=1 00:26:04.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:26:04.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:04.876 issued rwts: total=0,13219,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.876 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:04.876 job7: (groupid=0, jobs=1): err= 0: pid=1771353: Thu Nov 28 10:58:42 2024 00:26:04.876 write: IOPS=1172, BW=293MiB/s (307MB/s)(2945MiB/10048msec); 0 zone resets 00:26:04.876 slat (usec): min=21, max=20689, avg=836.55, stdev=1577.23 00:26:04.876 clat (msec): min=11, max=105, avg=53.74, stdev=14.42 00:26:04.876 lat (msec): min=12, max=105, avg=54.58, stdev=14.62 00:26:04.876 clat percentiles (msec): 00:26:04.876 | 1.00th=[ 20], 5.00th=[ 30], 10.00th=[ 40], 20.00th=[ 41], 00:26:04.876 | 30.00th=[ 43], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 61], 00:26:04.876 | 70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 74], 95.00th=[ 79], 00:26:04.876 | 99.00th=[ 84], 99.50th=[ 86], 99.90th=[ 97], 99.95th=[ 105], 00:26:04.876 | 99.99th=[ 106] 00:26:04.876 bw ( KiB/s): min=212992, max=515072, per=9.64%, avg=299904.00, stdev=74247.69, samples=20 00:26:04.876 iops : min= 832, max= 2012, avg=1171.50, 
stdev=290.03, samples=20 00:26:04.876 lat (msec) : 20=1.51%, 50=38.67%, 100=59.77%, 250=0.05% 00:26:04.876 cpu : usr=2.72%, sys=4.68%, ctx=2919, majf=0, minf=1 00:26:04.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:04.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:04.876 issued rwts: total=0,11778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.876 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:04.876 job8: (groupid=0, jobs=1): err= 0: pid=1771377: Thu Nov 28 10:58:42 2024 00:26:04.876 write: IOPS=1287, BW=322MiB/s (337MB/s)(3233MiB/10048msec); 0 zone resets 00:26:04.876 slat (usec): min=22, max=10045, avg=759.00, stdev=1401.27 00:26:04.876 clat (msec): min=8, max=105, avg=48.94, stdev=11.20 00:26:04.876 lat (msec): min=8, max=105, avg=49.70, stdev=11.34 00:26:04.876 clat percentiles (msec): 00:26:04.876 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 41], 00:26:04.876 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 53], 00:26:04.876 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 63], 95.00th=[ 67], 00:26:04.876 | 99.00th=[ 77], 99.50th=[ 81], 99.90th=[ 93], 99.95th=[ 101], 00:26:04.876 | 99.99th=[ 106] 00:26:04.876 bw ( KiB/s): min=250368, max=419328, per=10.59%, avg=329472.00, stdev=63661.07, samples=20 00:26:04.876 iops : min= 978, max= 1638, avg=1287.00, stdev=248.68, samples=20 00:26:04.876 lat (msec) : 10=0.02%, 20=0.29%, 50=59.12%, 100=40.51%, 250=0.06% 00:26:04.876 cpu : usr=2.80%, sys=5.12%, ctx=3180, majf=0, minf=1 00:26:04.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:26:04.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:04.876 issued rwts: total=0,12933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.876 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:04.876 job9: (groupid=0, jobs=1): err= 0: pid=1771389: Thu Nov 28 10:58:42 2024 00:26:04.876 write: IOPS=1526, BW=382MiB/s (400MB/s)(3845MiB/10072msec); 0 zone resets 00:26:04.876 slat (usec): min=17, max=42242, avg=639.28, stdev=1464.05 00:26:04.876 clat (msec): min=12, max=165, avg=41.26, stdev=24.25 00:26:04.876 lat (msec): min=12, max=173, avg=41.90, stdev=24.60 00:26:04.876 clat percentiles (msec): 00:26:04.876 | 1.00th=[ 19], 5.00th=[ 20], 10.00th=[ 20], 20.00th=[ 21], 00:26:04.876 | 30.00th=[ 21], 40.00th=[ 23], 50.00th=[ 40], 60.00th=[ 41], 00:26:04.876 | 70.00th=[ 44], 80.00th=[ 59], 90.00th=[ 69], 95.00th=[ 99], 00:26:04.876 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 148], 99.95th=[ 159], 00:26:04.876 | 99.99th=[ 165] 00:26:04.876 bw ( KiB/s): min=136192, max=812032, per=12.60%, avg=392089.60, stdev=217078.63, samples=20 00:26:04.876 iops : min= 532, max= 3172, avg=1531.60, stdev=847.96, samples=20 00:26:04.876 lat (msec) : 20=20.63%, 50=50.32%, 100=24.87%, 250=4.19% 00:26:04.876 cpu : usr=2.59%, sys=5.30%, ctx=3479, majf=0, minf=1 00:26:04.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:26:04.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:04.876 issued rwts: total=0,15379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.876 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:04.876 job10: (groupid=0, jobs=1): err= 0: 
pid=1771394: Thu Nov 28 10:58:42 2024 00:26:04.876 write: IOPS=997, BW=249MiB/s (261MB/s)(2506MiB/10048msec); 0 zone resets 00:26:04.876 slat (usec): min=22, max=28801, avg=948.95, stdev=1919.21 00:26:04.876 clat (msec): min=15, max=126, avg=63.19, stdev=17.46 00:26:04.876 lat (msec): min=15, max=126, avg=64.14, stdev=17.76 00:26:04.876 clat percentiles (msec): 00:26:04.876 | 1.00th=[ 20], 5.00th=[ 22], 10.00th=[ 41], 20.00th=[ 55], 00:26:04.876 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 66], 00:26:04.876 | 70.00th=[ 77], 80.00th=[ 79], 90.00th=[ 82], 95.00th=[ 85], 00:26:04.876 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 109], 99.95th=[ 125], 00:26:04.876 | 99.99th=[ 127] 00:26:04.876 bw ( KiB/s): min=201216, max=420352, per=8.19%, avg=254950.40, stdev=59508.15, samples=20 00:26:04.876 iops : min= 786, max= 1642, avg=995.90, stdev=232.45, samples=20 00:26:04.876 lat (msec) : 20=2.46%, 50=15.23%, 100=81.56%, 250=0.75% 00:26:04.876 cpu : usr=2.19%, sys=4.15%, ctx=2549, majf=0, minf=1 00:26:04.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:04.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:04.876 issued rwts: total=0,10022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.876 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:04.876 00:26:04.876 Run status group 0 (all jobs): 00:26:04.876 WRITE: bw=3039MiB/s (3187MB/s), 213MiB/s-382MiB/s (223MB/s-400MB/s), io=29.9GiB (32.1GB), run=10017-10072msec 00:26:04.876 00:26:04.876 Disk stats (read/write): 00:26:04.876 nvme0n1: ios=49/16865, merge=0/0, ticks=9/1213352, in_queue=1213361, util=96.60% 00:26:04.876 nvme10n1: ios=0/16876, merge=0/0, ticks=0/1212037, in_queue=1212037, util=96.77% 00:26:04.876 nvme1n1: ios=0/21729, merge=0/0, ticks=0/1218775, in_queue=1218775, util=97.12% 00:26:04.876 nvme2n1: ios=0/16923, merge=0/0, ticks=0/1213914, in_queue=1213914, util=97.28% 00:26:04.876 nvme3n1: ios=0/26257, merge=0/0, ticks=0/1219543, in_queue=1219543, util=97.37% 00:26:04.876 nvme4n1: ios=0/16870, merge=0/0, ticks=0/1212670, in_queue=1212670, util=97.76% 00:26:04.876 nvme5n1: ios=0/25910, merge=0/0, ticks=0/1218968, in_queue=1218968, util=97.95% 00:26:04.876 nvme6n1: ios=0/23155, merge=0/0, ticks=0/1216298, in_queue=1216298, util=98.09% 00:26:04.876 nvme7n1: ios=0/25461, merge=0/0, ticks=0/1216563, in_queue=1216563, util=98.58% 00:26:04.876 nvme8n1: ios=0/30467, merge=0/0, ticks=0/1218006, in_queue=1218006, util=98.80% 00:26:04.876 nvme9n1: ios=0/19642, merge=0/0, ticks=0/1217056, in_queue=1217056, util=98.97% 00:26:04.876 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:04.876 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:04.876 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.876 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:05.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:05.136 10:58:43 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.136 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:06.073 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.074 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:07.450 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:07.450 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:07.450 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local 
i=0 00:26:07.450 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:07.450 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:07.450 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:07.450 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:07.451 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:07.451 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:07.451 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.451 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.451 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.451 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.451 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:08.388 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.388 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:09.324 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.324 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:10.259 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.259 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:11.194 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:11.194 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:11.195 10:58:49 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.195 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.195 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:11.195 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:11.195 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.195 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.195 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:11.195 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.195 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.195 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.195 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.195 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:12.130 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:12.130 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:12.130 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:12.130 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:12.130 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:12.130 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:12.130 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:12.130 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:12.130 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:12.131 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.131 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.131 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.131 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.131 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:13.066 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:13.066 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect 
SPDK9 00:26:13.066 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:13.066 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:13.066 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:13.324 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:13.324 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:13.324 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:13.324 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:13.324 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.324 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.324 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.324 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.324 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:14.258 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.258 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:15.193 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:15.193 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:15.194 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:15.194 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:15.194 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:15.194 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:15.194 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:15.194 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:15.194 rmmod nvme_rdma 00:26:15.194 rmmod nvme_fabrics 00:26:15.194 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:15.194 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:15.194 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:15.194 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 1763248 ']' 00:26:15.194 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 1763248 00:26:15.194 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 1763248 ']' 00:26:15.194 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 1763248 00:26:15.194 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 
00:26:15.194 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:15.194 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1763248 00:26:15.453 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:15.453 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:15.453 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1763248' 00:26:15.453 killing process with pid 1763248 00:26:15.453 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 1763248 00:26:15.453 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 1763248 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:19.648 00:26:19.648 real 1m21.125s 00:26:19.648 user 5m6.610s 00:26:19.648 sys 0m21.178s 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.648 ************************************ 00:26:19.648 END TEST nvmf_multiconnection 00:26:19.648 ************************************ 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:19.648 ************************************ 00:26:19.648 START TEST nvmf_initiator_timeout 00:26:19.648 ************************************ 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:26:19.648 * Looking for test storage... 
00:26:19.648 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:19.648 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:19.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.649 --rc genhtml_branch_coverage=1 00:26:19.649 --rc genhtml_function_coverage=1 00:26:19.649 --rc genhtml_legend=1 00:26:19.649 --rc geninfo_all_blocks=1 00:26:19.649 --rc geninfo_unexecuted_blocks=1 00:26:19.649 00:26:19.649 ' 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:19.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.649 --rc genhtml_branch_coverage=1 00:26:19.649 --rc genhtml_function_coverage=1 00:26:19.649 --rc genhtml_legend=1 00:26:19.649 --rc geninfo_all_blocks=1 00:26:19.649 --rc geninfo_unexecuted_blocks=1 00:26:19.649 00:26:19.649 ' 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:19.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.649 --rc genhtml_branch_coverage=1 00:26:19.649 --rc genhtml_function_coverage=1 00:26:19.649 --rc genhtml_legend=1 00:26:19.649 --rc geninfo_all_blocks=1 00:26:19.649 --rc geninfo_unexecuted_blocks=1 00:26:19.649 00:26:19.649 ' 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:19.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.649 --rc genhtml_branch_coverage=1 00:26:19.649 --rc genhtml_function_coverage=1 00:26:19.649 --rc genhtml_legend=1 00:26:19.649 --rc geninfo_all_blocks=1 00:26:19.649 --rc geninfo_unexecuted_blocks=1 00:26:19.649 00:26:19.649 ' 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.649 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.650 10:58:57 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:19.650 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.650 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.650 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:19.650 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:19.650 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:19.650 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:27.774 10:59:05 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:27.774 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:27.774 
10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:27.774 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:27.774 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:27.775 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 
-- # (( 1 == 0 )) 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:27.775 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # rdma_device_init 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:27.775 10:59:05 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:27.775 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:27.775 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:27.775 altname enp217s0f0np0 00:26:27.775 altname ens818f0np0 00:26:27.775 inet 192.168.100.8/24 scope global mlx_0_0 00:26:27.775 valid_lft forever preferred_lft forever 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:27.775 10:59:05 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:27.775 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:27.775 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:27.775 altname enp217s0f1np1 00:26:27.775 altname ens818f1np1 00:26:27.775 inet 192.168.100.9/24 scope global mlx_0_1 00:26:27.775 valid_lft forever preferred_lft forever 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 
-- # continue 2 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:27.775 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:27.776 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:27.776 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:27.776 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:27.776 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:27.776 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:27.776 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:27.776 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:27.776 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:27.776 192.168.100.9' 00:26:27.776 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:27.776 192.168.100.9' 00:26:27.776 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # head -n 1 00:26:27.776 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # tail -n +2 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:27.776 192.168.100.9' 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # head -n 1 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=1779251 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 1779251 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 1779251 ']' 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.776 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.776 [2024-11-28 10:59:06.140716] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:26:27.776 [2024-11-28 10:59:06.140824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.776 [2024-11-28 10:59:06.297472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:27.776 [2024-11-28 10:59:06.404307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.776 [2024-11-28 10:59:06.404357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.776 [2024-11-28 10:59:06.404370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.776 [2024-11-28 10:59:06.404384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.776 [2024-11-28 10:59:06.404394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
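The target launch and the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above can be reproduced by hand. A rough equivalent, using the same binary path and flags as the trace (polling rpc_get_methods is just one way to probe liveness, not necessarily what waitforlisten does internally):

# Sketch: start nvmf_tgt with the flags from the trace and wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the UNIX-domain socket until the app answers an RPC.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt is up, pid $nvmfpid"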
00:26:27.776 [2024-11-28 10:59:06.406875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.776 [2024-11-28 10:59:06.406948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.776 [2024-11-28 10:59:06.406968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.776 [2024-11-28 10:59:06.406978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.344 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.344 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:28.344 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:28.344 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:28.344 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.344 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.344 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:28.344 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:28.344 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.344 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.344 Malloc0 00:26:28.344 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.344 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:28.344 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.344 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.344 Delay0 00:26:28.344 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.344 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:28.344 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.344 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.344 [2024-11-28 10:59:07.112049] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029bc0/0x7f32d6d48940) succeed. 00:26:28.344 [2024-11-28 10:59:07.121848] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029d40/0x7f32d6d04940) succeed. 
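The rpc_cmd calls above, together with the subsystem, namespace and listener calls that follow just below in the trace, amount to the following rpc.py sequence (rpc_cmd is assumed here to be the harness's wrapper around scripts/rpc.py; the sizes, latencies, NQN and address are the ones the test actually uses):

# Sketch: the target-side configuration this test builds, as plain rpc.py calls.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MB backing bdev, 512 B blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30    # 30 us avg/p99 read+write latency
rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420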
00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.604 [2024-11-28 10:59:07.405495] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.604 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:26:29.542 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:29.542 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:29.542 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:29.542 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:29.542 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:32.076 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:32.076 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:32.076 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:32.076 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:32.076 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:32.076 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:32.076 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1780076 00:26:32.076 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:32.076 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:32.076 [global] 00:26:32.076 thread=1 00:26:32.076 invalidate=1 00:26:32.076 rw=write 00:26:32.076 time_based=1 00:26:32.076 runtime=60 00:26:32.076 ioengine=libaio 00:26:32.076 direct=1 00:26:32.076 bs=4096 00:26:32.076 iodepth=1 00:26:32.076 norandommap=0 00:26:32.076 numjobs=1 00:26:32.076 00:26:32.076 verify_dump=1 00:26:32.076 verify_backlog=512 00:26:32.077 verify_state_save=0 00:26:32.077 do_verify=1 00:26:32.077 verify=crc32c-intel 00:26:32.077 [job0] 00:26:32.077 filename=/dev/nvme0n1 00:26:32.077 Could not set queue depth (nvme0n1) 00:26:32.077 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:32.077 fio-3.35 00:26:32.077 Starting 1 thread 00:26:34.610 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:34.610 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.610 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.610 true 00:26:34.610 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.610 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:34.610 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.610 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.610 true 00:26:34.610 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.610 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:34.610 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.610 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.611 true 00:26:34.611 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.611 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:34.611 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.611 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.611 true 00:26:34.611 10:59:13 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.611 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.902 true 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.902 true 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.902 true 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.902 true 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:37.902 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1780076 00:27:34.140 00:27:34.140 job0: (groupid=0, jobs=1): err= 0: pid=1780212: Thu Nov 28 11:00:10 2024 00:27:34.140 read: IOPS=1152, BW=4608KiB/s (4719kB/s)(270MiB/60000msec) 00:27:34.140 slat (usec): min=4, max=5739, avg= 9.05, stdev=21.84 00:27:34.140 clat (usec): min=78, max=42704k, avg=731.61, stdev=162429.63 00:27:34.140 lat (usec): min=99, max=42704k, avg=740.66, stdev=162429.63 00:27:34.140 clat percentiles (usec): 00:27:34.140 | 1.00th=[ 99], 5.00th=[ 103], 10.00th=[ 104], 20.00th=[ 108], 00:27:34.140 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 116], 00:27:34.140 | 70.00th=[ 118], 80.00th=[ 121], 90.00th=[ 124], 95.00th=[ 127], 00:27:34.140 | 99.00th=[ 135], 99.50th=[ 137], 99.90th=[ 147], 99.95th=[ 165], 00:27:34.140 | 99.99th=[ 306] 00:27:34.140 write: IOPS=1157, BW=4631KiB/s (4742kB/s)(271MiB/60000msec); 0 zone resets 00:27:34.140 slat (usec): 
min=3, max=329, avg=11.53, stdev= 2.88 00:27:34.140 clat (usec): min=83, max=1076, avg=110.23, stdev= 8.77 00:27:34.140 lat (usec): min=95, max=1089, avg=121.76, stdev= 9.39 00:27:34.140 clat percentiles (usec): 00:27:34.140 | 1.00th=[ 96], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 104], 00:27:34.140 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:27:34.140 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 124], 00:27:34.140 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 143], 99.95th=[ 151], 00:27:34.140 | 99.99th=[ 310] 00:27:34.140 bw ( KiB/s): min= 1552, max=17112, per=100.00%, avg=15447.77, stdev=2822.06, samples=35 00:27:34.140 iops : min= 388, max= 4278, avg=3861.94, stdev=705.51, samples=35 00:27:34.140 lat (usec) : 100=4.52%, 250=95.47%, 500=0.01% 00:27:34.140 lat (msec) : 2=0.01%, >=2000=0.01% 00:27:34.140 cpu : usr=1.77%, sys=3.07%, ctx=138588, majf=0, minf=105 00:27:34.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:34.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.141 issued rwts: total=69120,69461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:34.141 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:34.141 00:27:34.141 Run status group 0 (all jobs): 00:27:34.141 READ: bw=4608KiB/s (4719kB/s), 4608KiB/s-4608KiB/s (4719kB/s-4719kB/s), io=270MiB (283MB), run=60000-60000msec 00:27:34.141 WRITE: bw=4631KiB/s (4742kB/s), 4631KiB/s-4631KiB/s (4742kB/s-4742kB/s), io=271MiB (285MB), run=60000-60000msec 00:27:34.141 00:27:34.141 Disk stats (read/write): 00:27:34.141 nvme0n1: ios=68973/69120, merge=0/0, ticks=7199/7106, in_queue=14305, util=99.72% 00:27:34.141 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:34.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:34.141 nvmf hotplug test: fio successful as expected 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:34.141 rmmod nvme_rdma 00:27:34.141 rmmod nvme_fabrics 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 1779251 ']' 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 1779251 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 1779251 ']' 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 1779251 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:34.141 11:00:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1779251 00:27:34.141 11:00:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:34.141 11:00:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:34.141 11:00:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1779251' 00:27:34.141 killing process with pid 1779251 00:27:34.141 11:00:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 1779251 00:27:34.141 11:00:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 1779251 00:27:35.079 11:00:13 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.079 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:35.079 00:27:35.079 real 1m16.083s 00:27:35.079 user 4m39.277s 00:27:35.079 sys 0m8.997s 00:27:35.079 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.079 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.079 ************************************ 00:27:35.079 END TEST nvmf_initiator_timeout 00:27:35.079 ************************************ 00:27:35.079 11:00:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:35.079 11:00:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:27:35.079 11:00:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:27:35.079 11:00:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:27:35.079 11:00:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:35.079 11:00:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.079 11:00:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:35.079 ************************************ 00:27:35.079 START TEST nvmf_srq_overwhelm 00:27:35.079 ************************************ 00:27:35.079 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:27:35.340 * Looking for test storage... 
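Before the srq_overwhelm trace continues: the initiator_timeout run that just ended above drove I/O with fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v, which expands to essentially the job file echoed earlier in the trace. Consolidated here as a runnable sketch (/dev/nvme0n1 is where the connected namespace showed up in this run):

# Sketch: the 60 s, QD1, 4 KiB verify-write job used by the run above
# (contents copied from the job file echoed in the trace).
cat > job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1

verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio job0.fio

While this job ran, the test raised Delay0's latencies to 31000000 us (31 s, with p99 write at 310000000 us) via bdev_delay_update_latency, slept 3 s, then dropped them back to 30 us; fio finishing with verification intact is what the "nvmf hotplug test: fio successful as expected" line above reports.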
00:27:35.340 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lcov --version 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:35.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.340 --rc genhtml_branch_coverage=1 00:27:35.340 --rc genhtml_function_coverage=1 00:27:35.340 --rc genhtml_legend=1 00:27:35.340 --rc geninfo_all_blocks=1 00:27:35.340 --rc geninfo_unexecuted_blocks=1 00:27:35.340 00:27:35.340 ' 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:35.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.340 --rc genhtml_branch_coverage=1 00:27:35.340 --rc genhtml_function_coverage=1 00:27:35.340 --rc genhtml_legend=1 00:27:35.340 --rc geninfo_all_blocks=1 00:27:35.340 --rc geninfo_unexecuted_blocks=1 00:27:35.340 00:27:35.340 ' 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:35.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.340 --rc genhtml_branch_coverage=1 00:27:35.340 --rc genhtml_function_coverage=1 00:27:35.340 --rc genhtml_legend=1 00:27:35.340 --rc geninfo_all_blocks=1 00:27:35.340 --rc geninfo_unexecuted_blocks=1 00:27:35.340 00:27:35.340 ' 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:35.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.340 --rc genhtml_branch_coverage=1 00:27:35.340 --rc genhtml_function_coverage=1 00:27:35.340 --rc genhtml_legend=1 00:27:35.340 --rc geninfo_all_blocks=1 00:27:35.340 --rc geninfo_unexecuted_blocks=1 00:27:35.340 00:27:35.340 ' 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.340 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:35.341 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:27:35.341 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:43.471 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:43.472 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:43.472 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:43.472 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:43.472 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
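The probe above is the device-discovery half of nvmftestinit: gather_supported_nvmf_pci_devs walks the known Intel/Mellanox device IDs and maps each matching PCI function to its kernel net device through sysfs. A minimal sketch of that sysfs lookup, using the two mlx5 ports (0000:d9:00.0 and 0000:d9:00.1) reported in this run:

    # Sketch: resolve each detected mlx5 PCI function to its net device name,
    # mirroring the pci_net_devs lookup in the trace above.
    for pci in 0000:d9:00.0 0000:d9:00.1; do
        # every netdev registered on this function shows up under .../net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

In this run the lookup yields mlx_0_0 and mlx_0_1, which is what the "Found net devices under ..." lines below report.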
00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:43.472 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:43.472 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:43.472 altname enp217s0f0np0 00:27:43.472 altname ens818f0np0 00:27:43.472 inet 192.168.100.8/24 scope global mlx_0_0 00:27:43.472 valid_lft forever preferred_lft forever 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:43.472 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:43.472 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:43.472 altname enp217s0f1np1 00:27:43.472 altname ens818f1np1 00:27:43.472 inet 192.168.100.9/24 scope global mlx_0_1 00:27:43.472 valid_lft forever preferred_lft forever 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:43.472 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:43.473 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # cut -d/ -f1 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:43.732 192.168.100.9' 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:43.732 192.168.100.9' 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:43.732 192.168.100.9' 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=1795107 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 1795107 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 1795107 ']' 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
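allocate_nic_ips and get_available_rdma_ips above reduce each RDMA interface to its first IPv4 address with the same ip | awk | cut pipeline; a short sketch of that helper, with the interfaces and addresses seen in this run noted as comments:

    # Sketch of get_ip_address as traced above: print the IPv4 address
    # (without the /24 prefix length) of a single interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run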
00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.732 11:00:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:43.732 [2024-11-28 11:00:22.511629] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:27:43.733 [2024-11-28 11:00:22.511726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.992 [2024-11-28 11:00:22.665849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.992 [2024-11-28 11:00:22.769417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.992 [2024-11-28 11:00:22.769460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.992 [2024-11-28 11:00:22.769472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.992 [2024-11-28 11:00:22.769484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.992 [2024-11-28 11:00:22.769493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.992 [2024-11-28 11:00:22.771874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.992 [2024-11-28 11:00:22.771947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.992 [2024-11-28 11:00:22.771982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.992 [2024-11-28 11:00:22.771990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.561 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.561 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:27:44.561 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:44.561 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:44.561 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:44.561 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.561 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:27:44.561 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.561 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:44.561 [2024-11-28 11:00:23.429256] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fb286d0f940) succeed. 00:27:44.561 [2024-11-28 11:00:23.439548] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fb2863bd940) succeed. 
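With the target listening, srq_overwhelm.sh creates the RDMA transport and then loops over six subsystems, each backed by a 64 MiB malloc bdev and connected from the host side. A condensed sketch of that sequence (the loop is traced in full below), assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket:

    # Sketch: transport creation plus the per-subsystem setup repeated for cnode0..cnode5.
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
    for i in $(seq 0 5); do
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"              # 64 MiB bdev, 512 B blocks
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
        # host side: connect over RDMA with 15 I/O queues (the NVME_CONNECT value picked above)
        nvme connect -i 15 -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
            --hostid=8013ee90-59d8-e711-906e-00163566263e
    done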
00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:44.820 Malloc0 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:44.820 [2024-11-28 11:00:23.626593] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.820 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:27:45.760 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:27:45.760 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:27:45.760 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:45.760 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:27:45.760 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
# lsblk -l -o NAME 00:27:45.760 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:27:45.760 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:27:45.760 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:45.760 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:45.760 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.760 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:46.066 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.066 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:46.066 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.066 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:46.066 Malloc1 00:27:46.066 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.066 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:46.066 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.066 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:46.066 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.066 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:46.066 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.067 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:46.067 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.067 11:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.113 Malloc2 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.113 11:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:27:48.051 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:27:48.051 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:27:48.051 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:48.051 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:27:48.051 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:48.052 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:27:48.052 11:00:26 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:27:48.052 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:48.052 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:27:48.052 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.052 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:48.052 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.052 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:48.052 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.052 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:48.311 Malloc3 00:27:48.311 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.311 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:48.311 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.311 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:48.311 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.311 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:48.311 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.311 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:48.311 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.311 11:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:27:49.249 
11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.249 11:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:49.249 Malloc4 00:27:49.249 11:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.249 11:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:49.249 11:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.249 11:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:49.249 11:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.249 11:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:27:49.249 11:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.249 11:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:49.249 11:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.249 11:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:27:50.188 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:27:50.188 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:27:50.188 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:50.188 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
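Each connect above is gated by waitforblk, which waits for the new namespace to surface as a block device before the next subsystem is created. A sketch of that check, assuming a polling loop with a retry cap (the helper's exact limit is not visible in this trace):

    # Sketch of the waitforblk gate: poll lsblk until the expected
    # namespace (e.g. nvme4n1) appears, as in the lsblk | grep checks above.
    waitforblk() {
        local name=$1 i=0
        until lsblk -l -o NAME | grep -q -w "$name"; do
            (( ++i > 15 )) && return 1   # assumed retry cap, not taken from the trace
            sleep 1
        done
        return 0
    }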
00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:50.449 Malloc5 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.449 11:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:27:51.388 11:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:27:51.388 11:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:27:51.388 11:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:51.388 11:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:27:51.388 11:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:51.388 11:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:27:51.388 11:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:27:51.388 11:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:27:51.388 
[global] 00:27:51.388 thread=1 00:27:51.388 invalidate=1 00:27:51.388 rw=read 00:27:51.388 time_based=1 00:27:51.388 runtime=10 00:27:51.388 ioengine=libaio 00:27:51.388 direct=1 00:27:51.388 bs=1048576 00:27:51.388 iodepth=128 00:27:51.388 norandommap=1 00:27:51.388 numjobs=13 00:27:51.388 00:27:51.388 [job0] 00:27:51.388 filename=/dev/nvme0n1 00:27:51.388 [job1] 00:27:51.388 filename=/dev/nvme1n1 00:27:51.388 [job2] 00:27:51.388 filename=/dev/nvme2n1 00:27:51.388 [job3] 00:27:51.388 filename=/dev/nvme3n1 00:27:51.388 [job4] 00:27:51.388 filename=/dev/nvme4n1 00:27:51.388 [job5] 00:27:51.388 filename=/dev/nvme5n1 00:27:51.647 Could not set queue depth (nvme0n1) 00:27:51.647 Could not set queue depth (nvme1n1) 00:27:51.647 Could not set queue depth (nvme2n1) 00:27:51.647 Could not set queue depth (nvme3n1) 00:27:51.647 Could not set queue depth (nvme4n1) 00:27:51.647 Could not set queue depth (nvme5n1) 00:27:51.906 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:51.906 ... 00:27:51.906 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:51.906 ... 00:27:51.906 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:51.906 ... 00:27:51.906 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:51.907 ... 00:27:51.907 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:51.907 ... 00:27:51.907 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:51.907 ... 
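The [global]/[jobN] block above is the job file the fio-wrapper writes for the read phase: 1 MiB reads at queue depth 128, 13 jobs per namespace (6 x 13 = 78 threads), 10 s time-based, libaio with direct I/O. A sketch that rebuilds the same job file by hand and runs it with fio, assuming the namespaces landed on /dev/nvme0n1 through /dev/nvme5n1 as in this run:

    # Sketch: regenerate the job file dumped above and run it directly.
    job=/tmp/srq_overwhelm.fio
    printf '[global]\nthread=1\ninvalidate=1\nrw=read\ntime_based=1\nruntime=10\n' > "$job"
    printf 'ioengine=libaio\ndirect=1\nbs=1048576\niodepth=128\nnorandommap=1\nnumjobs=13\n' >> "$job"
    for n in 0 1 2 3 4 5; do
        printf '[job%d]\nfilename=/dev/nvme%dn1\n' "$n" "$n" >> "$job"
    done
    fio "$job"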
00:27:51.907 fio-3.35 00:27:51.907 Starting 78 threads 00:28:06.804 00:28:06.804 job0: (groupid=0, jobs=1): err= 0: pid=1796713: Thu Nov 28 11:00:43 2024 00:28:06.804 read: IOPS=113, BW=114MiB/s (119MB/s)(1380MiB/12136msec) 00:28:06.804 slat (usec): min=40, max=2162.2k, avg=7257.81, stdev=86735.99 00:28:06.804 clat (msec): min=285, max=10660, avg=874.06, stdev=1386.07 00:28:06.804 lat (msec): min=287, max=10704, avg=881.32, stdev=1392.91 00:28:06.804 clat percentiles (msec): 00:28:06.804 | 1.00th=[ 288], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 292], 00:28:06.804 | 30.00th=[ 296], 40.00th=[ 305], 50.00th=[ 426], 60.00th=[ 430], 00:28:06.804 | 70.00th=[ 435], 80.00th=[ 460], 90.00th=[ 3910], 95.00th=[ 4799], 00:28:06.804 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[10671], 00:28:06.804 | 99.99th=[10671] 00:28:06.804 bw ( KiB/s): min= 1899, max=446464, per=8.61%, avg=256599.50, stdev=166777.84, samples=10 00:28:06.804 iops : min= 1, max= 436, avg=250.50, stdev=163.01, samples=10 00:28:06.804 lat (msec) : 500=84.93%, 750=3.04%, >=2000=12.03% 00:28:06.804 cpu : usr=0.06%, sys=1.90%, ctx=1254, majf=0, minf=32769 00:28:06.804 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:28:06.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.804 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.804 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.804 job0: (groupid=0, jobs=1): err= 0: pid=1796714: Thu Nov 28 11:00:43 2024 00:28:06.804 read: IOPS=13, BW=13.6MiB/s (14.3MB/s)(165MiB/12131msec) 00:28:06.804 slat (usec): min=925, max=2142.6k, avg=60733.68, stdev=289069.26 00:28:06.804 clat (msec): min=2108, max=10724, avg=5132.03, stdev=2785.06 00:28:06.804 lat (msec): min=2200, max=11705, avg=5192.76, stdev=2825.35 00:28:06.804 clat percentiles (msec): 00:28:06.804 | 1.00th=[ 2198], 5.00th=[ 2333], 10.00th=[ 2433], 20.00th=[ 2769], 00:28:06.804 | 30.00th=[ 3071], 40.00th=[ 3373], 50.00th=[ 3775], 60.00th=[ 4111], 00:28:06.804 | 70.00th=[ 6409], 80.00th=[ 9329], 90.00th=[ 9329], 95.00th=[ 9463], 00:28:06.804 | 99.00th=[ 9597], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:28:06.804 | 99.99th=[10671] 00:28:06.804 bw ( KiB/s): min= 1928, max=49152, per=0.87%, avg=25901.33, stdev=23620.29, samples=3 00:28:06.804 iops : min= 1, max= 48, avg=25.00, stdev=23.52, samples=3 00:28:06.804 lat (msec) : >=2000=100.00% 00:28:06.804 cpu : usr=0.03%, sys=0.99%, ctx=416, majf=0, minf=32769 00:28:06.804 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.7%, 32=19.4%, >=64=61.8% 00:28:06.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.804 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6% 00:28:06.804 issued rwts: total=165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.804 job0: (groupid=0, jobs=1): err= 0: pid=1796715: Thu Nov 28 11:00:43 2024 00:28:06.804 read: IOPS=15, BW=15.3MiB/s (16.1MB/s)(155MiB/10122msec) 00:28:06.804 slat (usec): min=1126, max=3220.5k, avg=64533.51, stdev=351905.26 00:28:06.804 clat (msec): min=118, max=9880, avg=3116.15, stdev=3651.75 00:28:06.804 lat (msec): min=128, max=10006, avg=3180.68, stdev=3686.59 00:28:06.804 clat percentiles (msec): 00:28:06.804 | 1.00th=[ 129], 5.00th=[ 279], 10.00th=[ 401], 20.00th=[ 634], 00:28:06.804 | 30.00th=[ 869], 40.00th=[ 1116], 
50.00th=[ 1418], 60.00th=[ 1720], 00:28:06.804 | 70.00th=[ 2039], 80.00th=[ 9597], 90.00th=[ 9731], 95.00th=[ 9731], 00:28:06.804 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:28:06.804 | 99.99th=[ 9866] 00:28:06.804 bw ( KiB/s): min=57229, max=57229, per=1.92%, avg=57229.00, stdev= 0.00, samples=1 00:28:06.804 iops : min= 55, max= 55, avg=55.00, stdev= 0.00, samples=1 00:28:06.804 lat (msec) : 250=4.52%, 500=10.32%, 750=10.97%, 1000=10.32%, 2000=33.55% 00:28:06.804 lat (msec) : >=2000=30.32% 00:28:06.804 cpu : usr=0.00%, sys=1.28%, ctx=410, majf=0, minf=32769 00:28:06.804 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.3%, 32=20.6%, >=64=59.4% 00:28:06.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.804 complete : 0=0.0%, 4=96.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.4% 00:28:06.804 issued rwts: total=155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.805 job0: (groupid=0, jobs=1): err= 0: pid=1796716: Thu Nov 28 11:00:43 2024 00:28:06.805 read: IOPS=38, BW=38.2MiB/s (40.1MB/s)(385MiB/10074msec) 00:28:06.805 slat (usec): min=82, max=2170.9k, avg=26067.96, stdev=187133.63 00:28:06.805 clat (msec): min=33, max=8573, avg=3173.19, stdev=3357.48 00:28:06.805 lat (msec): min=123, max=8577, avg=3199.26, stdev=3365.46 00:28:06.805 clat percentiles (msec): 00:28:06.805 | 1.00th=[ 128], 5.00th=[ 268], 10.00th=[ 567], 20.00th=[ 592], 00:28:06.805 | 30.00th=[ 617], 40.00th=[ 659], 50.00th=[ 1062], 60.00th=[ 1687], 00:28:06.805 | 70.00th=[ 7215], 80.00th=[ 7819], 90.00th=[ 8221], 95.00th=[ 8423], 00:28:06.805 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:28:06.805 | 99.99th=[ 8557] 00:28:06.805 bw ( KiB/s): min= 2048, max=143360, per=1.96%, avg=58517.11, stdev=47432.73, samples=9 00:28:06.805 iops : min= 2, max= 140, avg=57.11, stdev=46.32, samples=9 00:28:06.805 lat (msec) : 50=0.26%, 250=4.42%, 500=4.42%, 750=35.84%, 1000=4.42% 00:28:06.805 lat (msec) : 2000=14.81%, >=2000=35.84% 00:28:06.805 cpu : usr=0.00%, sys=1.71%, ctx=573, majf=0, minf=32769 00:28:06.805 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6% 00:28:06.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.805 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:06.805 issued rwts: total=385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.805 job0: (groupid=0, jobs=1): err= 0: pid=1796717: Thu Nov 28 11:00:43 2024 00:28:06.805 read: IOPS=1, BW=2035KiB/s (2084kB/s)(24.0MiB/12077msec) 00:28:06.805 slat (usec): min=459, max=2147.2k, avg=416936.79, stdev=809249.42 00:28:06.805 clat (msec): min=2069, max=12037, avg=7754.39, stdev=4216.91 00:28:06.805 lat (msec): min=2076, max=12076, avg=8171.33, stdev=4124.08 00:28:06.805 clat percentiles (msec): 00:28:06.805 | 1.00th=[ 2072], 5.00th=[ 2072], 10.00th=[ 2089], 20.00th=[ 2106], 00:28:06.805 | 30.00th=[ 4245], 40.00th=[ 6342], 50.00th=[ 8557], 60.00th=[10671], 00:28:06.805 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:28:06.805 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.805 | 99.99th=[12013] 00:28:06.805 lat (msec) : >=2000=100.00% 00:28:06.805 cpu : usr=0.00%, sys=0.14%, ctx=70, majf=0, minf=6145 00:28:06.805 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:28:06.805 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.805 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:06.805 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.805 job0: (groupid=0, jobs=1): err= 0: pid=1796718: Thu Nov 28 11:00:43 2024 00:28:06.805 read: IOPS=3, BW=3299KiB/s (3379kB/s)(39.0MiB/12104msec) 00:28:06.805 slat (usec): min=980, max=2074.0k, avg=256770.87, stdev=648459.10 00:28:06.805 clat (msec): min=2089, max=12101, avg=8451.49, stdev=3632.23 00:28:06.805 lat (msec): min=2107, max=12103, avg=8708.26, stdev=3522.97 00:28:06.805 clat percentiles (msec): 00:28:06.805 | 1.00th=[ 2089], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 4245], 00:28:06.805 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:28:06.805 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:28:06.805 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.805 | 99.99th=[12147] 00:28:06.805 lat (msec) : >=2000=100.00% 00:28:06.805 cpu : usr=0.00%, sys=0.35%, ctx=92, majf=0, minf=9985 00:28:06.805 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:28:06.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.805 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.805 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.805 job0: (groupid=0, jobs=1): err= 0: pid=1796719: Thu Nov 28 11:00:43 2024 00:28:06.805 read: IOPS=94, BW=94.6MiB/s (99.2MB/s)(958MiB/10127msec) 00:28:06.805 slat (usec): min=42, max=2064.6k, avg=10438.26, stdev=121081.48 00:28:06.805 clat (msec): min=120, max=10022, avg=1150.51, stdev=2134.54 00:28:06.805 lat (msec): min=128, max=10033, avg=1160.95, stdev=2146.73 00:28:06.805 clat percentiles (msec): 00:28:06.805 | 1.00th=[ 205], 5.00th=[ 275], 10.00th=[ 275], 20.00th=[ 275], 00:28:06.805 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 279], 60.00th=[ 279], 00:28:06.805 | 70.00th=[ 279], 80.00th=[ 284], 90.00th=[ 6678], 95.00th=[ 6745], 00:28:06.805 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[10000], 99.95th=[10000], 00:28:06.805 | 99.99th=[10000] 00:28:06.805 bw ( KiB/s): min= 6144, max=456704, per=7.14%, avg=212633.25, stdev=212026.37, samples=8 00:28:06.805 iops : min= 6, max= 446, avg=207.63, stdev=207.03, samples=8 00:28:06.805 lat (msec) : 250=1.25%, 500=82.78%, 2000=0.42%, >=2000=15.55% 00:28:06.805 cpu : usr=0.08%, sys=1.79%, ctx=802, majf=0, minf=32769 00:28:06.805 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4% 00:28:06.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.805 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.805 issued rwts: total=958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.805 job0: (groupid=0, jobs=1): err= 0: pid=1796720: Thu Nov 28 11:00:43 2024 00:28:06.805 read: IOPS=5, BW=5542KiB/s (5675kB/s)(66.0MiB/12194msec) 00:28:06.805 slat (usec): min=951, max=2099.9k, avg=151743.55, stdev=510329.26 00:28:06.805 clat (msec): min=2178, max=12188, avg=10361.93, stdev=2968.25 00:28:06.805 lat (msec): min=2201, max=12193, avg=10513.67, stdev=2794.37 00:28:06.805 clat percentiles (msec): 00:28:06.805 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4329], 
20.00th=[ 8557], 00:28:06.805 | 30.00th=[10805], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:28:06.805 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:06.805 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.805 | 99.99th=[12147] 00:28:06.805 lat (msec) : >=2000=100.00% 00:28:06.805 cpu : usr=0.00%, sys=0.63%, ctx=106, majf=0, minf=16897 00:28:06.805 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5% 00:28:06.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.805 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.805 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.805 job0: (groupid=0, jobs=1): err= 0: pid=1796721: Thu Nov 28 11:00:43 2024 00:28:06.805 read: IOPS=3, BW=3193KiB/s (3270kB/s)(38.0MiB/12187msec) 00:28:06.805 slat (usec): min=1383, max=2117.7k, avg=263361.33, stdev=669427.05 00:28:06.805 clat (msec): min=2178, max=12183, avg=10897.41, stdev=2645.98 00:28:06.805 lat (msec): min=4256, max=12186, avg=11160.77, stdev=2218.21 00:28:06.805 clat percentiles (msec): 00:28:06.805 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[10671], 00:28:06.805 | 30.00th=[12013], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:28:06.805 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:06.805 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.805 | 99.99th=[12147] 00:28:06.805 lat (msec) : >=2000=100.00% 00:28:06.805 cpu : usr=0.00%, sys=0.40%, ctx=79, majf=0, minf=9729 00:28:06.805 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:28:06.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.805 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.805 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.805 job0: (groupid=0, jobs=1): err= 0: pid=1796722: Thu Nov 28 11:00:43 2024 00:28:06.805 read: IOPS=9, BW=9499KiB/s (9727kB/s)(113MiB/12182msec) 00:28:06.805 slat (usec): min=926, max=2101.4k, avg=88894.16, stdev=395419.38 00:28:06.805 clat (msec): min=2136, max=12179, avg=9628.37, stdev=3300.05 00:28:06.805 lat (msec): min=4145, max=12181, avg=9717.26, stdev=3231.00 00:28:06.805 clat percentiles (msec): 00:28:06.805 | 1.00th=[ 4144], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 4329], 00:28:06.805 | 30.00th=[ 6477], 40.00th=[10671], 50.00th=[12013], 60.00th=[12013], 00:28:06.805 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:06.805 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.805 | 99.99th=[12147] 00:28:06.805 lat (msec) : >=2000=100.00% 00:28:06.805 cpu : usr=0.00%, sys=1.08%, ctx=104, majf=0, minf=28929 00:28:06.805 IO depths : 1=0.9%, 2=1.8%, 4=3.5%, 8=7.1%, 16=14.2%, 32=28.3%, >=64=44.2% 00:28:06.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.805 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.805 issued rwts: total=113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.805 job0: (groupid=0, jobs=1): err= 0: pid=1796723: Thu Nov 28 11:00:43 2024 00:28:06.805 read: IOPS=44, BW=44.4MiB/s 
(46.6MB/s)(537MiB/12094msec) 00:28:06.805 slat (usec): min=65, max=2098.3k, avg=18622.07, stdev=139084.80 00:28:06.805 clat (msec): min=413, max=8583, avg=2373.16, stdev=1933.23 00:28:06.805 lat (msec): min=420, max=9242, avg=2391.78, stdev=1954.71 00:28:06.805 clat percentiles (msec): 00:28:06.805 | 1.00th=[ 426], 5.00th=[ 435], 10.00th=[ 443], 20.00th=[ 531], 00:28:06.805 | 30.00th=[ 609], 40.00th=[ 667], 50.00th=[ 726], 60.00th=[ 3708], 00:28:06.805 | 70.00th=[ 3977], 80.00th=[ 4597], 90.00th=[ 4799], 95.00th=[ 4866], 00:28:06.805 | 99.00th=[ 6275], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:28:06.805 | 99.99th=[ 8557] 00:28:06.805 bw ( KiB/s): min= 6144, max=286720, per=3.52%, avg=104960.00, stdev=109496.20, samples=8 00:28:06.805 iops : min= 6, max= 280, avg=102.50, stdev=106.93, samples=8 00:28:06.805 lat (msec) : 500=16.57%, 750=34.26%, 1000=0.37%, >=2000=48.79% 00:28:06.805 cpu : usr=0.02%, sys=1.05%, ctx=815, majf=0, minf=32769 00:28:06.805 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.3% 00:28:06.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.805 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:06.805 issued rwts: total=537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.805 job0: (groupid=0, jobs=1): err= 0: pid=1796724: Thu Nov 28 11:00:43 2024 00:28:06.805 read: IOPS=19, BW=19.6MiB/s (20.6MB/s)(239MiB/12166msec) 00:28:06.805 slat (usec): min=630, max=2077.9k, avg=41879.35, stdev=212192.20 00:28:06.805 clat (msec): min=2087, max=6716, avg=4538.79, stdev=1534.85 00:28:06.806 lat (msec): min=2099, max=7706, avg=4580.67, stdev=1536.03 00:28:06.806 clat percentiles (msec): 00:28:06.806 | 1.00th=[ 2106], 5.00th=[ 2467], 10.00th=[ 2500], 20.00th=[ 2567], 00:28:06.806 | 30.00th=[ 2601], 40.00th=[ 4396], 50.00th=[ 5336], 60.00th=[ 5537], 00:28:06.806 | 70.00th=[ 5805], 80.00th=[ 5940], 90.00th=[ 6208], 95.00th=[ 6275], 00:28:06.806 | 99.00th=[ 6342], 99.50th=[ 6409], 99.90th=[ 6745], 99.95th=[ 6745], 00:28:06.806 | 99.99th=[ 6745] 00:28:06.806 bw ( KiB/s): min= 8192, max=112640, per=1.54%, avg=45875.20, stdev=48260.72, samples=5 00:28:06.806 iops : min= 8, max= 110, avg=44.80, stdev=47.13, samples=5 00:28:06.806 lat (msec) : >=2000=100.00% 00:28:06.806 cpu : usr=0.01%, sys=0.89%, ctx=642, majf=0, minf=32769 00:28:06.806 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.7%, 32=13.4%, >=64=73.6% 00:28:06.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.806 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:28:06.806 issued rwts: total=239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.806 job0: (groupid=0, jobs=1): err= 0: pid=1796725: Thu Nov 28 11:00:43 2024 00:28:06.806 read: IOPS=59, BW=59.3MiB/s (62.1MB/s)(717MiB/12099msec) 00:28:06.806 slat (usec): min=36, max=2102.9k, avg=16797.00, stdev=140733.36 00:28:06.806 clat (msec): min=52, max=6922, avg=1825.49, stdev=2101.82 00:28:06.806 lat (msec): min=574, max=6925, avg=1842.29, stdev=2107.14 00:28:06.806 clat percentiles (msec): 00:28:06.806 | 1.00th=[ 575], 5.00th=[ 584], 10.00th=[ 592], 20.00th=[ 625], 00:28:06.806 | 30.00th=[ 701], 40.00th=[ 743], 50.00th=[ 810], 60.00th=[ 869], 00:28:06.806 | 70.00th=[ 1003], 80.00th=[ 2769], 90.00th=[ 6611], 95.00th=[ 6745], 00:28:06.806 | 99.00th=[ 6879], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 
6946], 00:28:06.806 | 99.99th=[ 6946] 00:28:06.806 bw ( KiB/s): min=13891, max=219136, per=4.05%, avg=120541.70, stdev=80144.25, samples=10 00:28:06.806 iops : min= 13, max= 214, avg=117.50, stdev=78.40, samples=10 00:28:06.806 lat (msec) : 100=0.14%, 750=40.03%, 1000=28.45%, 2000=8.23%, >=2000=23.15% 00:28:06.806 cpu : usr=0.02%, sys=0.90%, ctx=879, majf=0, minf=32769 00:28:06.806 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:28:06.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.806 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:06.806 issued rwts: total=717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.806 job1: (groupid=0, jobs=1): err= 0: pid=1796726: Thu Nov 28 11:00:43 2024 00:28:06.806 read: IOPS=2, BW=2971KiB/s (3042kB/s)(35.0MiB/12064msec) 00:28:06.806 slat (usec): min=973, max=2121.4k, avg=343105.08, stdev=742931.69 00:28:06.806 clat (msec): min=54, max=12012, avg=6074.02, stdev=4118.87 00:28:06.806 lat (msec): min=2079, max=12063, avg=6417.13, stdev=4102.83 00:28:06.806 clat percentiles (msec): 00:28:06.806 | 1.00th=[ 55], 5.00th=[ 2072], 10.00th=[ 2089], 20.00th=[ 2106], 00:28:06.806 | 30.00th=[ 2123], 40.00th=[ 4144], 50.00th=[ 4245], 60.00th=[ 6409], 00:28:06.806 | 70.00th=[ 8557], 80.00th=[11879], 90.00th=[12013], 95.00th=[12013], 00:28:06.806 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.806 | 99.99th=[12013] 00:28:06.806 lat (msec) : 100=2.86%, >=2000=97.14% 00:28:06.806 cpu : usr=0.00%, sys=0.27%, ctx=79, majf=0, minf=8961 00:28:06.806 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:28:06.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.806 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.806 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.806 job1: (groupid=0, jobs=1): err= 0: pid=1796727: Thu Nov 28 11:00:43 2024 00:28:06.806 read: IOPS=37, BW=37.4MiB/s (39.2MB/s)(453MiB/12107msec) 00:28:06.806 slat (usec): min=47, max=2092.0k, avg=22127.40, stdev=180534.96 00:28:06.806 clat (msec): min=402, max=10924, avg=3308.42, stdev=4339.58 00:28:06.806 lat (msec): min=403, max=10927, avg=3330.55, stdev=4352.37 00:28:06.806 clat percentiles (msec): 00:28:06.806 | 1.00th=[ 405], 5.00th=[ 405], 10.00th=[ 409], 20.00th=[ 409], 00:28:06.806 | 30.00th=[ 414], 40.00th=[ 439], 50.00th=[ 527], 60.00th=[ 676], 00:28:06.806 | 70.00th=[ 4178], 80.00th=[10268], 90.00th=[10671], 95.00th=[10805], 00:28:06.806 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:28:06.806 | 99.99th=[10939] 00:28:06.806 bw ( KiB/s): min= 1398, max=288768, per=2.49%, avg=74106.78, stdev=110859.49, samples=9 00:28:06.806 iops : min= 1, max= 282, avg=72.22, stdev=108.35, samples=9 00:28:06.806 lat (msec) : 500=47.46%, 750=15.67%, 1000=4.86%, >=2000=32.01% 00:28:06.806 cpu : usr=0.02%, sys=1.16%, ctx=467, majf=0, minf=32770 00:28:06.806 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.1% 00:28:06.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.806 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:06.806 issued rwts: total=453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.806 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:28:06.806 job1: (groupid=0, jobs=1): err= 0: pid=1796728: Thu Nov 28 11:00:43 2024 00:28:06.806 read: IOPS=9, BW=9441KiB/s (9668kB/s)(93.0MiB/10087msec) 00:28:06.806 slat (usec): min=575, max=2063.5k, avg=107567.18, stdev=429370.81 00:28:06.806 clat (msec): min=82, max=10085, avg=5955.99, stdev=3831.31 00:28:06.806 lat (msec): min=91, max=10086, avg=6063.56, stdev=3804.95 00:28:06.806 clat percentiles (msec): 00:28:06.806 | 1.00th=[ 83], 5.00th=[ 126], 10.00th=[ 165], 20.00th=[ 2265], 00:28:06.806 | 30.00th=[ 2333], 40.00th=[ 4463], 50.00th=[ 6611], 60.00th=[ 8658], 00:28:06.806 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:28:06.806 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:28:06.806 | 99.99th=[10134] 00:28:06.806 lat (msec) : 100=2.15%, 250=16.13%, >=2000=81.72% 00:28:06.806 cpu : usr=0.00%, sys=0.57%, ctx=80, majf=0, minf=23809 00:28:06.806 IO depths : 1=1.1%, 2=2.2%, 4=4.3%, 8=8.6%, 16=17.2%, 32=34.4%, >=64=32.3% 00:28:06.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.806 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.806 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.806 job1: (groupid=0, jobs=1): err= 0: pid=1796729: Thu Nov 28 11:00:43 2024 00:28:06.806 read: IOPS=30, BW=30.0MiB/s (31.5MB/s)(303MiB/10095msec) 00:28:06.806 slat (usec): min=60, max=2120.4k, avg=33016.95, stdev=235345.06 00:28:06.806 clat (msec): min=88, max=9132, avg=4039.03, stdev=3870.12 00:28:06.806 lat (msec): min=105, max=9137, avg=4072.04, stdev=3872.30 00:28:06.806 clat percentiles (msec): 00:28:06.806 | 1.00th=[ 121], 5.00th=[ 230], 10.00th=[ 502], 20.00th=[ 558], 00:28:06.806 | 30.00th=[ 575], 40.00th=[ 592], 50.00th=[ 718], 60.00th=[ 4799], 00:28:06.806 | 70.00th=[ 8792], 80.00th=[ 8926], 90.00th=[ 9060], 95.00th=[ 9060], 00:28:06.806 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9194], 99.95th=[ 9194], 00:28:06.806 | 99.99th=[ 9194] 00:28:06.806 bw ( KiB/s): min= 6144, max=162116, per=1.73%, avg=51535.86, stdev=62703.22, samples=7 00:28:06.806 iops : min= 6, max= 158, avg=50.14, stdev=61.25, samples=7 00:28:06.806 lat (msec) : 100=0.33%, 250=4.95%, 500=4.29%, 750=40.92%, >=2000=49.50% 00:28:06.806 cpu : usr=0.00%, sys=1.21%, ctx=497, majf=0, minf=32769 00:28:06.806 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.2% 00:28:06.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.806 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:28:06.806 issued rwts: total=303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.806 job1: (groupid=0, jobs=1): err= 0: pid=1796730: Thu Nov 28 11:00:43 2024 00:28:06.806 read: IOPS=66, BW=66.0MiB/s (69.2MB/s)(797MiB/12070msec) 00:28:06.806 slat (usec): min=44, max=2177.5k, avg=15050.53, stdev=147750.71 00:28:06.806 clat (msec): min=71, max=9001, avg=1821.51, stdev=2875.20 00:28:06.806 lat (msec): min=432, max=9003, avg=1836.56, stdev=2884.27 00:28:06.806 clat percentiles (msec): 00:28:06.806 | 1.00th=[ 435], 5.00th=[ 435], 10.00th=[ 435], 20.00th=[ 439], 00:28:06.806 | 30.00th=[ 443], 40.00th=[ 447], 50.00th=[ 468], 60.00th=[ 527], 00:28:06.806 | 70.00th=[ 575], 80.00th=[ 1083], 90.00th=[ 8792], 95.00th=[ 8926], 00:28:06.806 | 99.00th=[ 8926], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 
00:28:06.806 | 99.99th=[ 9060] 00:28:06.806 bw ( KiB/s): min=16384, max=299008, per=5.75%, avg=171171.50, stdev=129922.72, samples=8 00:28:06.806 iops : min= 16, max= 292, avg=167.00, stdev=126.96, samples=8 00:28:06.806 lat (msec) : 100=0.13%, 500=57.09%, 750=18.07%, 1000=4.14%, 2000=1.13% 00:28:06.806 lat (msec) : >=2000=19.45% 00:28:06.806 cpu : usr=0.00%, sys=1.10%, ctx=825, majf=0, minf=32769 00:28:06.806 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:28:06.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.806 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.806 issued rwts: total=797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.806 job1: (groupid=0, jobs=1): err= 0: pid=1796731: Thu Nov 28 11:00:43 2024 00:28:06.806 read: IOPS=11, BW=11.9MiB/s (12.5MB/s)(144MiB/12123msec) 00:28:06.806 slat (usec): min=1929, max=2125.2k, avg=69520.75, stdev=312961.96 00:28:06.806 clat (msec): min=2110, max=9720, avg=3791.35, stdev=1785.45 00:28:06.806 lat (msec): min=2159, max=9740, avg=3860.87, stdev=1863.38 00:28:06.806 clat percentiles (msec): 00:28:06.806 | 1.00th=[ 2165], 5.00th=[ 2232], 10.00th=[ 2366], 20.00th=[ 2500], 00:28:06.806 | 30.00th=[ 2869], 40.00th=[ 3205], 50.00th=[ 3440], 60.00th=[ 3641], 00:28:06.806 | 70.00th=[ 3910], 80.00th=[ 4111], 90.00th=[ 6342], 95.00th=[ 8490], 00:28:06.806 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:28:06.806 | 99.99th=[ 9731] 00:28:06.806 bw ( KiB/s): min= 2048, max=32833, per=0.59%, avg=17440.50, stdev=21768.28, samples=2 00:28:06.806 iops : min= 2, max= 32, avg=17.00, stdev=21.21, samples=2 00:28:06.806 lat (msec) : >=2000=100.00% 00:28:06.806 cpu : usr=0.01%, sys=0.77%, ctx=392, majf=0, minf=32769 00:28:06.806 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.6%, 16=11.1%, 32=22.2%, >=64=56.2% 00:28:06.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.806 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.6% 00:28:06.806 issued rwts: total=144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.807 job1: (groupid=0, jobs=1): err= 0: pid=1796732: Thu Nov 28 11:00:43 2024 00:28:06.807 read: IOPS=156, BW=156MiB/s (164MB/s)(1901MiB/12177msec) 00:28:06.807 slat (usec): min=51, max=1324.9k, avg=5275.52, stdev=30918.04 00:28:06.807 clat (msec): min=362, max=4376, avg=791.25, stdev=842.94 00:28:06.807 lat (msec): min=365, max=4380, avg=796.53, stdev=845.35 00:28:06.807 clat percentiles (msec): 00:28:06.807 | 1.00th=[ 376], 5.00th=[ 435], 10.00th=[ 439], 20.00th=[ 447], 00:28:06.807 | 30.00th=[ 575], 40.00th=[ 584], 50.00th=[ 592], 60.00th=[ 592], 00:28:06.807 | 70.00th=[ 617], 80.00th=[ 659], 90.00th=[ 726], 95.00th=[ 3641], 00:28:06.807 | 99.00th=[ 4212], 99.50th=[ 4329], 99.90th=[ 4396], 99.95th=[ 4396], 00:28:06.807 | 99.99th=[ 4396] 00:28:06.807 bw ( KiB/s): min= 1796, max=301056, per=6.77%, avg=201807.33, stdev=77556.55, samples=18 00:28:06.807 iops : min= 1, max= 294, avg=197.00, stdev=75.86, samples=18 00:28:06.807 lat (msec) : 500=26.20%, 750=64.81%, 1000=2.26%, >=2000=6.73% 00:28:06.807 cpu : usr=0.16%, sys=3.39%, ctx=1611, majf=0, minf=32769 00:28:06.807 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:28:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.807 complete : 0=0.0%, 
4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.807 issued rwts: total=1901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.807 job1: (groupid=0, jobs=1): err= 0: pid=1796733: Thu Nov 28 11:00:43 2024 00:28:06.807 read: IOPS=4, BW=4827KiB/s (4943kB/s)(57.0MiB/12091msec) 00:28:06.807 slat (usec): min=851, max=2133.3k, avg=211098.67, stdev=608421.72 00:28:06.807 clat (msec): min=57, max=12089, avg=7153.27, stdev=4048.17 00:28:06.807 lat (msec): min=2100, max=12090, avg=7364.37, stdev=3984.81 00:28:06.807 clat percentiles (msec): 00:28:06.807 | 1.00th=[ 58], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 4279], 00:28:06.807 | 30.00th=[ 4279], 40.00th=[ 4279], 50.00th=[ 6409], 60.00th=[ 8658], 00:28:06.807 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:28:06.807 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.807 | 99.99th=[12147] 00:28:06.807 lat (msec) : 100=1.75%, >=2000=98.25% 00:28:06.807 cpu : usr=0.00%, sys=0.50%, ctx=64, majf=0, minf=14593 00:28:06.807 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:28:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.807 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.807 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.807 job1: (groupid=0, jobs=1): err= 0: pid=1796734: Thu Nov 28 11:00:43 2024 00:28:06.807 read: IOPS=11, BW=11.3MiB/s (11.8MB/s)(136MiB/12063msec) 00:28:06.807 slat (usec): min=421, max=2116.6k, avg=88220.36, stdev=392936.69 00:28:06.807 clat (msec): min=63, max=11920, avg=10294.91, stdev=2685.58 00:28:06.807 lat (msec): min=2126, max=11922, avg=10383.13, stdev=2539.27 00:28:06.807 clat percentiles (msec): 00:28:06.807 | 1.00th=[ 2123], 5.00th=[ 2165], 10.00th=[ 6409], 20.00th=[ 9866], 00:28:06.807 | 30.00th=[11208], 40.00th=[11208], 50.00th=[11342], 60.00th=[11476], 00:28:06.807 | 70.00th=[11610], 80.00th=[11745], 90.00th=[11879], 95.00th=[11879], 00:28:06.807 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:28:06.807 | 99.99th=[11879] 00:28:06.807 bw ( KiB/s): min= 2043, max=10730, per=0.21%, avg=6386.50, stdev=6142.64, samples=2 00:28:06.807 iops : min= 1, max= 10, avg= 5.50, stdev= 6.36, samples=2 00:28:06.807 lat (msec) : 100=0.74%, >=2000=99.26% 00:28:06.807 cpu : usr=0.02%, sys=0.65%, ctx=201, majf=0, minf=32769 00:28:06.807 IO depths : 1=0.7%, 2=1.5%, 4=2.9%, 8=5.9%, 16=11.8%, 32=23.5%, >=64=53.7% 00:28:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.807 complete : 0=0.0%, 4=90.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=10.0% 00:28:06.807 issued rwts: total=136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.807 job1: (groupid=0, jobs=1): err= 0: pid=1796735: Thu Nov 28 11:00:43 2024 00:28:06.807 read: IOPS=32, BW=32.9MiB/s (34.5MB/s)(400MiB/12147msec) 00:28:06.807 slat (usec): min=126, max=2151.4k, avg=25155.73, stdev=184163.13 00:28:06.807 clat (msec): min=527, max=8527, avg=3731.20, stdev=3088.70 00:28:06.807 lat (msec): min=531, max=8529, avg=3756.35, stdev=3094.90 00:28:06.807 clat percentiles (msec): 00:28:06.807 | 1.00th=[ 531], 5.00th=[ 542], 10.00th=[ 558], 20.00th=[ 584], 00:28:06.807 | 30.00th=[ 718], 40.00th=[ 2333], 50.00th=[ 2735], 60.00th=[ 3104], 
00:28:06.807 | 70.00th=[ 7215], 80.00th=[ 8087], 90.00th=[ 8288], 95.00th=[ 8423], 00:28:06.807 | 99.00th=[ 8490], 99.50th=[ 8490], 99.90th=[ 8557], 99.95th=[ 8557], 00:28:06.807 | 99.99th=[ 8557] 00:28:06.807 bw ( KiB/s): min= 1961, max=135168, per=1.88%, avg=55901.70, stdev=52500.29, samples=10 00:28:06.807 iops : min= 1, max= 132, avg=54.50, stdev=51.37, samples=10 00:28:06.807 lat (msec) : 750=30.75%, 2000=4.75%, >=2000=64.50% 00:28:06.807 cpu : usr=0.01%, sys=1.19%, ctx=803, majf=0, minf=32769 00:28:06.807 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.2% 00:28:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.807 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:06.807 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.807 job1: (groupid=0, jobs=1): err= 0: pid=1796736: Thu Nov 28 11:00:43 2024 00:28:06.807 read: IOPS=41, BW=41.9MiB/s (43.9MB/s)(508MiB/12129msec) 00:28:06.807 slat (usec): min=42, max=2094.5k, avg=19699.89, stdev=142354.86 00:28:06.807 clat (msec): min=486, max=12047, avg=2631.83, stdev=2849.80 00:28:06.807 lat (msec): min=488, max=12048, avg=2651.53, stdev=2869.19 00:28:06.807 clat percentiles (msec): 00:28:06.807 | 1.00th=[ 489], 5.00th=[ 489], 10.00th=[ 489], 20.00th=[ 527], 00:28:06.807 | 30.00th=[ 531], 40.00th=[ 535], 50.00th=[ 676], 60.00th=[ 2668], 00:28:06.807 | 70.00th=[ 3775], 80.00th=[ 5269], 90.00th=[ 6208], 95.00th=[ 8658], 00:28:06.807 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.807 | 99.99th=[12013] 00:28:06.807 bw ( KiB/s): min= 6144, max=237568, per=2.62%, avg=78028.80, stdev=88864.48, samples=10 00:28:06.807 iops : min= 6, max= 232, avg=76.20, stdev=86.78, samples=10 00:28:06.807 lat (msec) : 500=16.93%, 750=36.81%, 1000=2.76%, >=2000=43.50% 00:28:06.807 cpu : usr=0.03%, sys=1.11%, ctx=726, majf=0, minf=32769 00:28:06.807 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:28:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.807 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:06.807 issued rwts: total=508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.807 job1: (groupid=0, jobs=1): err= 0: pid=1796737: Thu Nov 28 11:00:43 2024 00:28:06.807 read: IOPS=5, BW=5654KiB/s (5789kB/s)(67.0MiB/12135msec) 00:28:06.807 slat (usec): min=911, max=2100.8k, avg=149733.09, stdev=506699.99 00:28:06.807 clat (msec): min=2101, max=12131, avg=9744.17, stdev=3212.31 00:28:06.807 lat (msec): min=4109, max=12134, avg=9893.90, stdev=3081.85 00:28:06.807 clat percentiles (msec): 00:28:06.807 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 6342], 00:28:06.807 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[11879], 60.00th=[12013], 00:28:06.807 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:06.807 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.807 | 99.99th=[12147] 00:28:06.807 lat (msec) : >=2000=100.00% 00:28:06.807 cpu : usr=0.00%, sys=0.64%, ctx=93, majf=0, minf=17153 00:28:06.807 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=11.9%, 16=23.9%, 32=47.8%, >=64=6.0% 00:28:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.807 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 
00:28:06.807 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.807 job1: (groupid=0, jobs=1): err= 0: pid=1796738: Thu Nov 28 11:00:43 2024 00:28:06.807 read: IOPS=1, BW=1608KiB/s (1647kB/s)(19.0MiB/12096msec) 00:28:06.807 slat (msec): min=6, max=2151, avg=527.02, stdev=873.66 00:28:06.807 clat (msec): min=2081, max=12035, avg=7341.44, stdev=3761.60 00:28:06.807 lat (msec): min=2110, max=12094, avg=7868.45, stdev=3684.39 00:28:06.807 clat percentiles (msec): 00:28:06.807 | 1.00th=[ 2089], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 4279], 00:28:06.807 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 8557], 60.00th=[ 8658], 00:28:06.807 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:28:06.807 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.807 | 99.99th=[12013] 00:28:06.807 lat (msec) : >=2000=100.00% 00:28:06.807 cpu : usr=0.01%, sys=0.13%, ctx=68, majf=0, minf=4865 00:28:06.807 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0% 00:28:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.807 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:06.807 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.807 job2: (groupid=0, jobs=1): err= 0: pid=1796739: Thu Nov 28 11:00:43 2024 00:28:06.807 read: IOPS=6, BW=6242KiB/s (6392kB/s)(74.0MiB/12140msec) 00:28:06.807 slat (usec): min=910, max=2120.4k, avg=135680.21, stdev=487993.62 00:28:06.807 clat (msec): min=2098, max=12137, avg=11002.68, stdev=2460.95 00:28:06.807 lat (msec): min=2153, max=12139, avg=11138.36, stdev=2229.18 00:28:06.807 clat percentiles (msec): 00:28:06.807 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6409], 20.00th=[10671], 00:28:06.807 | 30.00th=[11879], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:28:06.807 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:06.807 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.807 | 99.99th=[12147] 00:28:06.807 lat (msec) : >=2000=100.00% 00:28:06.807 cpu : usr=0.00%, sys=0.71%, ctx=106, majf=0, minf=18945 00:28:06.807 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9% 00:28:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.807 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.807 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.807 job2: (groupid=0, jobs=1): err= 0: pid=1796740: Thu Nov 28 11:00:43 2024 00:28:06.807 read: IOPS=12, BW=12.4MiB/s (13.0MB/s)(150MiB/12066msec) 00:28:06.808 slat (usec): min=518, max=2171.8k, avg=80071.82, stdev=350051.96 00:28:06.808 clat (msec): min=54, max=9548, avg=3346.47, stdev=1493.68 00:28:06.808 lat (msec): min=1958, max=9556, avg=3426.54, stdev=1552.19 00:28:06.808 clat percentiles (msec): 00:28:06.808 | 1.00th=[ 1938], 5.00th=[ 1972], 10.00th=[ 2165], 20.00th=[ 2467], 00:28:06.808 | 30.00th=[ 2702], 40.00th=[ 2903], 50.00th=[ 3071], 60.00th=[ 3272], 00:28:06.808 | 70.00th=[ 3540], 80.00th=[ 3742], 90.00th=[ 4044], 95.00th=[ 6208], 00:28:06.808 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:28:06.808 | 99.99th=[ 9597] 00:28:06.808 bw ( KiB/s): min=43914, max=43914, 
per=1.47%, avg=43914.00, stdev= 0.00, samples=1 00:28:06.808 iops : min= 42, max= 42, avg=42.00, stdev= 0.00, samples=1 00:28:06.808 lat (msec) : 100=0.67%, 2000=6.67%, >=2000=92.67% 00:28:06.808 cpu : usr=0.02%, sys=0.83%, ctx=393, majf=0, minf=32769 00:28:06.808 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=5.3%, 16=10.7%, 32=21.3%, >=64=58.0% 00:28:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.808 complete : 0=0.0%, 4=95.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.2% 00:28:06.808 issued rwts: total=150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.808 job2: (groupid=0, jobs=1): err= 0: pid=1796741: Thu Nov 28 11:00:43 2024 00:28:06.808 read: IOPS=7, BW=7907KiB/s (8096kB/s)(78.0MiB/10102msec) 00:28:06.808 slat (usec): min=446, max=2071.3k, avg=128346.04, stdev=464571.21 00:28:06.808 clat (msec): min=90, max=10092, avg=7584.11, stdev=3559.78 00:28:06.808 lat (msec): min=108, max=10101, avg=7712.46, stdev=3465.32 00:28:06.808 clat percentiles (msec): 00:28:06.808 | 1.00th=[ 91], 5.00th=[ 140], 10.00th=[ 197], 20.00th=[ 4463], 00:28:06.808 | 30.00th=[ 6611], 40.00th=[ 9866], 50.00th=[ 9866], 60.00th=[ 9866], 00:28:06.808 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:28:06.808 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:28:06.808 | 99.99th=[10134] 00:28:06.808 lat (msec) : 100=1.28%, 250=10.26%, >=2000=88.46% 00:28:06.808 cpu : usr=0.00%, sys=0.71%, ctx=123, majf=0, minf=19969 00:28:06.808 IO depths : 1=1.3%, 2=2.6%, 4=5.1%, 8=10.3%, 16=20.5%, 32=41.0%, >=64=19.2% 00:28:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.808 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.808 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.808 job2: (groupid=0, jobs=1): err= 0: pid=1796742: Thu Nov 28 11:00:43 2024 00:28:06.808 read: IOPS=18, BW=18.2MiB/s (19.1MB/s)(220MiB/12060msec) 00:28:06.808 slat (usec): min=448, max=2130.6k, avg=45586.92, stdev=276683.11 00:28:06.808 clat (msec): min=622, max=11838, avg=6674.51, stdev=4720.11 00:28:06.808 lat (msec): min=626, max=12055, avg=6720.09, stdev=4719.64 00:28:06.808 clat percentiles (msec): 00:28:06.808 | 1.00th=[ 642], 5.00th=[ 667], 10.00th=[ 693], 20.00th=[ 751], 00:28:06.808 | 30.00th=[ 785], 40.00th=[ 5067], 50.00th=[ 9329], 60.00th=[10805], 00:28:06.808 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11342], 95.00th=[11342], 00:28:06.808 | 99.00th=[11342], 99.50th=[11476], 99.90th=[11879], 99.95th=[11879], 00:28:06.808 | 99.99th=[11879] 00:28:06.808 bw ( KiB/s): min= 1406, max=96063, per=0.80%, avg=23701.00, stdev=32951.17, samples=8 00:28:06.808 iops : min= 1, max= 93, avg=22.88, stdev=32.02, samples=8 00:28:06.808 lat (msec) : 750=20.91%, 1000=11.36%, >=2000=67.73% 00:28:06.808 cpu : usr=0.02%, sys=0.73%, ctx=439, majf=0, minf=32769 00:28:06.808 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.3%, 32=14.5%, >=64=71.4% 00:28:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.808 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:28:06.808 issued rwts: total=220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.808 job2: (groupid=0, jobs=1): err= 0: pid=1796743: Thu Nov 28 11:00:43 2024 00:28:06.808 read: 
IOPS=5, BW=5747KiB/s (5885kB/s)(68.0MiB/12116msec) 00:28:06.808 slat (usec): min=770, max=2076.0k, avg=148110.07, stdev=504363.61 00:28:06.808 clat (msec): min=2043, max=12111, avg=9131.93, stdev=3258.75 00:28:06.808 lat (msec): min=2146, max=12114, avg=9280.04, stdev=3159.11 00:28:06.808 clat percentiles (msec): 00:28:06.808 | 1.00th=[ 2039], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:28:06.808 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12013], 00:28:06.808 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:06.808 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.808 | 99.99th=[12147] 00:28:06.808 lat (msec) : >=2000=100.00% 00:28:06.808 cpu : usr=0.00%, sys=0.62%, ctx=67, majf=0, minf=17409 00:28:06.808 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:28:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.808 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.808 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.808 job2: (groupid=0, jobs=1): err= 0: pid=1796744: Thu Nov 28 11:00:43 2024 00:28:06.808 read: IOPS=4, BW=4248KiB/s (4350kB/s)(42.0MiB/10125msec) 00:28:06.808 slat (usec): min=907, max=2084.0k, avg=238178.96, stdev=625046.35 00:28:06.808 clat (msec): min=120, max=10121, avg=7478.88, stdev=3797.83 00:28:06.808 lat (msec): min=129, max=10124, avg=7717.06, stdev=3635.33 00:28:06.808 clat percentiles (msec): 00:28:06.808 | 1.00th=[ 122], 5.00th=[ 148], 10.00th=[ 222], 20.00th=[ 2333], 00:28:06.808 | 30.00th=[ 6611], 40.00th=[ 8792], 50.00th=[ 9866], 60.00th=[10000], 00:28:06.808 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:28:06.808 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:28:06.808 | 99.99th=[10134] 00:28:06.808 lat (msec) : 250=14.29%, >=2000=85.71% 00:28:06.808 cpu : usr=0.01%, sys=0.40%, ctx=97, majf=0, minf=10753 00:28:06.808 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:28:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.808 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.808 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.808 job2: (groupid=0, jobs=1): err= 0: pid=1796745: Thu Nov 28 11:00:43 2024 00:28:06.808 read: IOPS=10, BW=10.2MiB/s (10.7MB/s)(123MiB/12069msec) 00:28:06.808 slat (usec): min=1450, max=2171.8k, avg=97641.40, stdev=389284.62 00:28:06.808 clat (msec): min=58, max=12052, avg=3759.50, stdev=2318.94 00:28:06.808 lat (msec): min=2161, max=12068, avg=3857.14, stdev=2412.78 00:28:06.808 clat percentiles (msec): 00:28:06.808 | 1.00th=[ 2165], 5.00th=[ 2232], 10.00th=[ 2333], 20.00th=[ 2534], 00:28:06.808 | 30.00th=[ 2702], 40.00th=[ 2937], 50.00th=[ 3171], 60.00th=[ 3440], 00:28:06.808 | 70.00th=[ 3742], 80.00th=[ 3943], 90.00th=[ 4279], 95.00th=[11879], 00:28:06.808 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.808 | 99.99th=[12013] 00:28:06.808 lat (msec) : 100=0.81%, >=2000=99.19% 00:28:06.808 cpu : usr=0.02%, sys=1.11%, ctx=393, majf=0, minf=31489 00:28:06.808 IO depths : 1=0.8%, 2=1.6%, 4=3.3%, 8=6.5%, 16=13.0%, 32=26.0%, >=64=48.8% 00:28:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:06.808 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.808 issued rwts: total=123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.808 job2: (groupid=0, jobs=1): err= 0: pid=1796746: Thu Nov 28 11:00:43 2024 00:28:06.808 read: IOPS=7, BW=7224KiB/s (7398kB/s)(85.0MiB/12048msec) 00:28:06.808 slat (usec): min=1805, max=2103.9k, avg=117771.47, stdev=426839.68 00:28:06.808 clat (msec): min=2036, max=12041, avg=5254.52, stdev=2695.46 00:28:06.808 lat (msec): min=2064, max=12047, avg=5372.29, stdev=2770.82 00:28:06.808 clat percentiles (msec): 00:28:06.808 | 1.00th=[ 2039], 5.00th=[ 2165], 10.00th=[ 3574], 20.00th=[ 3708], 00:28:06.808 | 30.00th=[ 3809], 40.00th=[ 3910], 50.00th=[ 4010], 60.00th=[ 4144], 00:28:06.808 | 70.00th=[ 4279], 80.00th=[ 6409], 90.00th=[10671], 95.00th=[12013], 00:28:06.808 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.808 | 99.99th=[12013] 00:28:06.808 lat (msec) : >=2000=100.00% 00:28:06.808 cpu : usr=0.00%, sys=0.58%, ctx=174, majf=0, minf=21761 00:28:06.808 IO depths : 1=1.2%, 2=2.4%, 4=4.7%, 8=9.4%, 16=18.8%, 32=37.6%, >=64=25.9% 00:28:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.808 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.808 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.808 job2: (groupid=0, jobs=1): err= 0: pid=1796747: Thu Nov 28 11:00:43 2024 00:28:06.808 read: IOPS=5, BW=5938KiB/s (6080kB/s)(59.0MiB/10175msec) 00:28:06.808 slat (usec): min=953, max=2117.4k, avg=170157.33, stdev=538937.16 00:28:06.808 clat (msec): min=134, max=10173, avg=8596.76, stdev=2877.80 00:28:06.808 lat (msec): min=209, max=10174, avg=8766.92, stdev=2657.19 00:28:06.808 clat percentiles (msec): 00:28:06.808 | 1.00th=[ 136], 5.00th=[ 2232], 10.00th=[ 2333], 20.00th=[ 6544], 00:28:06.808 | 30.00th=[ 9866], 40.00th=[10000], 50.00th=[10134], 60.00th=[10134], 00:28:06.808 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:28:06.808 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:28:06.808 | 99.99th=[10134] 00:28:06.808 lat (msec) : 250=3.39%, >=2000=96.61% 00:28:06.808 cpu : usr=0.00%, sys=0.74%, ctx=115, majf=0, minf=15105 00:28:06.808 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:28:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.808 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.808 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.808 job2: (groupid=0, jobs=1): err= 0: pid=1796748: Thu Nov 28 11:00:43 2024 00:28:06.808 read: IOPS=32, BW=32.3MiB/s (33.9MB/s)(389MiB/12049msec) 00:28:06.808 slat (usec): min=57, max=2143.7k, avg=25732.11, stdev=209880.55 00:28:06.808 clat (msec): min=283, max=11025, avg=3790.34, stdev=4772.56 00:28:06.808 lat (msec): min=284, max=11025, avg=3816.07, stdev=4784.02 00:28:06.808 clat percentiles (msec): 00:28:06.808 | 1.00th=[ 284], 5.00th=[ 288], 10.00th=[ 288], 20.00th=[ 288], 00:28:06.808 | 30.00th=[ 300], 40.00th=[ 359], 50.00th=[ 430], 60.00th=[ 542], 00:28:06.808 | 70.00th=[ 8658], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939], 00:28:06.808 | 99.00th=[11073], 99.50th=[11073], 
99.90th=[11073], 99.95th=[11073], 00:28:06.809 | 99.99th=[11073] 00:28:06.809 bw ( KiB/s): min= 1424, max=253952, per=2.25%, avg=67054.87, stdev=114896.52, samples=8 00:28:06.809 iops : min= 1, max= 248, avg=65.25, stdev=112.19, samples=8 00:28:06.809 lat (msec) : 500=56.81%, 750=6.68%, >=2000=36.50% 00:28:06.809 cpu : usr=0.00%, sys=1.00%, ctx=495, majf=0, minf=32769 00:28:06.809 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.2%, >=64=83.8% 00:28:06.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.809 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:06.809 issued rwts: total=389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.809 job2: (groupid=0, jobs=1): err= 0: pid=1796749: Thu Nov 28 11:00:43 2024 00:28:06.809 read: IOPS=9, BW=9500KiB/s (9728kB/s)(112MiB/12073msec) 00:28:06.809 slat (usec): min=1853, max=4279.4k, avg=107265.55, stdev=497388.42 00:28:06.809 clat (msec): min=58, max=11940, avg=3392.48, stdev=1605.97 00:28:06.809 lat (msec): min=2161, max=12072, avg=3499.75, stdev=1773.73 00:28:06.809 clat percentiles (msec): 00:28:06.809 | 1.00th=[ 2165], 5.00th=[ 2232], 10.00th=[ 2333], 20.00th=[ 2500], 00:28:06.809 | 30.00th=[ 2702], 40.00th=[ 2903], 50.00th=[ 3071], 60.00th=[ 3339], 00:28:06.809 | 70.00th=[ 3641], 80.00th=[ 3876], 90.00th=[ 4144], 95.00th=[ 4329], 00:28:06.809 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:28:06.809 | 99.99th=[11879] 00:28:06.809 lat (msec) : 100=0.89%, >=2000=99.11% 00:28:06.809 cpu : usr=0.00%, sys=1.04%, ctx=383, majf=0, minf=28673 00:28:06.809 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.1%, 16=14.3%, 32=28.6%, >=64=43.8% 00:28:06.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.809 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.809 issued rwts: total=112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.809 job2: (groupid=0, jobs=1): err= 0: pid=1796750: Thu Nov 28 11:00:43 2024 00:28:06.809 read: IOPS=285, BW=285MiB/s (299MB/s)(2898MiB/10161msec) 00:28:06.809 slat (usec): min=38, max=2073.0k, avg=3475.02, stdev=38977.15 00:28:06.809 clat (msec): min=77, max=3363, avg=430.03, stdev=596.60 00:28:06.809 lat (msec): min=136, max=3364, avg=433.51, stdev=599.50 00:28:06.809 clat percentiles (msec): 00:28:06.809 | 1.00th=[ 136], 5.00th=[ 138], 10.00th=[ 138], 20.00th=[ 138], 00:28:06.809 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 140], 60.00th=[ 249], 00:28:06.809 | 70.00th=[ 550], 80.00th=[ 584], 90.00th=[ 751], 95.00th=[ 1053], 00:28:06.809 | 99.00th=[ 3239], 99.50th=[ 3306], 99.90th=[ 3373], 99.95th=[ 3373], 00:28:06.809 | 99.99th=[ 3373] 00:28:06.809 bw ( KiB/s): min=24576, max=944128, per=11.90%, avg=354505.50, stdev=322775.59, samples=16 00:28:06.809 iops : min= 24, max= 922, avg=346.12, stdev=315.24, samples=16 00:28:06.809 lat (msec) : 100=0.03%, 250=59.97%, 500=5.94%, 750=24.05%, 1000=4.76% 00:28:06.809 lat (msec) : 2000=0.86%, >=2000=4.38% 00:28:06.809 cpu : usr=0.11%, sys=3.49%, ctx=2840, majf=0, minf=32769 00:28:06.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:28:06.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.809 issued rwts: total=2898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:28:06.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.809 job2: (groupid=0, jobs=1): err= 0: pid=1796751: Thu Nov 28 11:00:43 2024 00:28:06.809 read: IOPS=5, BW=5486KiB/s (5618kB/s)(65.0MiB/12132msec) 00:28:06.809 slat (usec): min=844, max=2079.9k, avg=153900.74, stdev=513249.72 00:28:06.809 clat (msec): min=2127, max=12126, avg=8060.70, stdev=3787.59 00:28:06.809 lat (msec): min=2135, max=12131, avg=8214.60, stdev=3745.76 00:28:06.809 clat percentiles (msec): 00:28:06.809 | 1.00th=[ 2123], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 4279], 00:28:06.809 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[10671], 00:28:06.809 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:28:06.809 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.809 | 99.99th=[12147] 00:28:06.809 lat (msec) : >=2000=100.00% 00:28:06.809 cpu : usr=0.00%, sys=0.52%, ctx=57, majf=0, minf=16641 00:28:06.809 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:28:06.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.809 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.809 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.809 job3: (groupid=0, jobs=1): err= 0: pid=1796752: Thu Nov 28 11:00:43 2024 00:28:06.809 read: IOPS=16, BW=16.5MiB/s (17.3MB/s)(199MiB/12082msec) 00:28:06.809 slat (usec): min=76, max=2111.6k, avg=60355.63, stdev=321934.56 00:28:06.809 clat (msec): min=69, max=11408, avg=7355.95, stdev=3889.83 00:28:06.809 lat (msec): min=1030, max=11417, avg=7416.31, stdev=3863.33 00:28:06.809 clat percentiles (msec): 00:28:06.809 | 1.00th=[ 1020], 5.00th=[ 1083], 10.00th=[ 1116], 20.00th=[ 3205], 00:28:06.809 | 30.00th=[ 4329], 40.00th=[ 7282], 50.00th=[ 8557], 60.00th=[10805], 00:28:06.809 | 70.00th=[10939], 80.00th=[11208], 90.00th=[11342], 95.00th=[11342], 00:28:06.809 | 99.00th=[11342], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:28:06.809 | 99.99th=[11476] 00:28:06.809 bw ( KiB/s): min= 6144, max=53248, per=0.81%, avg=24108.67, stdev=19941.26, samples=6 00:28:06.809 iops : min= 6, max= 52, avg=23.33, stdev=19.30, samples=6 00:28:06.809 lat (msec) : 100=0.50%, 2000=13.07%, >=2000=86.43% 00:28:06.809 cpu : usr=0.04%, sys=0.71%, ctx=372, majf=0, minf=32769 00:28:06.809 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=16.1%, >=64=68.3% 00:28:06.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.809 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:28:06.809 issued rwts: total=199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.809 job3: (groupid=0, jobs=1): err= 0: pid=1796753: Thu Nov 28 11:00:43 2024 00:28:06.809 read: IOPS=3, BW=3829KiB/s (3921kB/s)(45.0MiB/12034msec) 00:28:06.809 slat (usec): min=610, max=2096.1k, avg=265792.01, stdev=638981.62 00:28:06.809 clat (msec): min=73, max=12032, avg=6866.43, stdev=3661.66 00:28:06.809 lat (msec): min=2136, max=12033, avg=7132.22, stdev=3590.75 00:28:06.809 clat percentiles (msec): 00:28:06.809 | 1.00th=[ 73], 5.00th=[ 2198], 10.00th=[ 3809], 20.00th=[ 3943], 00:28:06.809 | 30.00th=[ 4044], 40.00th=[ 4144], 50.00th=[ 4329], 60.00th=[ 6477], 00:28:06.809 | 70.00th=[10671], 80.00th=[11879], 90.00th=[11879], 95.00th=[12013], 00:28:06.809 | 99.00th=[12013], 
99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.809 | 99.99th=[12013] 00:28:06.809 lat (msec) : 100=2.22%, >=2000=97.78% 00:28:06.809 cpu : usr=0.01%, sys=0.26%, ctx=145, majf=0, minf=11521 00:28:06.809 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:28:06.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.809 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.809 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.809 job3: (groupid=0, jobs=1): err= 0: pid=1796754: Thu Nov 28 11:00:43 2024 00:28:06.809 read: IOPS=5, BW=5949KiB/s (6092kB/s)(59.0MiB/10156msec) 00:28:06.809 slat (usec): min=933, max=2091.8k, avg=169763.75, stdev=533868.28 00:28:06.809 clat (msec): min=139, max=10152, avg=7976.29, stdev=3336.90 00:28:06.809 lat (msec): min=158, max=10155, avg=8146.05, stdev=3182.54 00:28:06.809 clat percentiles (msec): 00:28:06.809 | 1.00th=[ 140], 5.00th=[ 169], 10.00th=[ 2232], 20.00th=[ 4396], 00:28:06.809 | 30.00th=[ 8658], 40.00th=[10000], 50.00th=[10000], 60.00th=[10134], 00:28:06.809 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:28:06.809 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:28:06.809 | 99.99th=[10134] 00:28:06.809 lat (msec) : 250=6.78%, >=2000=93.22% 00:28:06.809 cpu : usr=0.00%, sys=0.67%, ctx=112, majf=0, minf=15105 00:28:06.809 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:28:06.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.809 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.809 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.809 job3: (groupid=0, jobs=1): err= 0: pid=1796755: Thu Nov 28 11:00:43 2024 00:28:06.809 read: IOPS=3, BW=3223KiB/s (3300kB/s)(38.0MiB/12073msec) 00:28:06.809 slat (usec): min=921, max=2080.8k, avg=315518.84, stdev=715575.87 00:28:06.809 clat (msec): min=82, max=12060, avg=6332.15, stdev=3528.45 00:28:06.809 lat (msec): min=2119, max=12072, avg=6647.67, stdev=3490.37 00:28:06.809 clat percentiles (msec): 00:28:06.809 | 1.00th=[ 83], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 2165], 00:28:06.809 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 6477], 00:28:06.810 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[10805], 95.00th=[12013], 00:28:06.810 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.810 | 99.99th=[12013] 00:28:06.810 lat (msec) : 100=2.63%, >=2000=97.37% 00:28:06.810 cpu : usr=0.00%, sys=0.31%, ctx=63, majf=0, minf=9729 00:28:06.810 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:28:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.810 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.810 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.810 job3: (groupid=0, jobs=1): err= 0: pid=1796756: Thu Nov 28 11:00:43 2024 00:28:06.810 read: IOPS=7, BW=7531KiB/s (7712kB/s)(89.0MiB/12101msec) 00:28:06.810 slat (usec): min=910, max=2083.5k, avg=112435.05, stdev=439986.24 00:28:06.810 clat (msec): min=2093, max=12099, avg=7517.19, stdev=3501.02 00:28:06.810 lat (msec): min=2103, 
max=12100, avg=7629.62, stdev=3485.52 00:28:06.810 clat percentiles (msec): 00:28:06.810 | 1.00th=[ 2089], 5.00th=[ 2123], 10.00th=[ 4144], 20.00th=[ 4178], 00:28:06.810 | 30.00th=[ 4279], 40.00th=[ 6342], 50.00th=[ 6477], 60.00th=[ 8557], 00:28:06.810 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:28:06.810 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.810 | 99.99th=[12147] 00:28:06.810 lat (msec) : >=2000=100.00% 00:28:06.810 cpu : usr=0.02%, sys=0.73%, ctx=91, majf=0, minf=22785 00:28:06.810 IO depths : 1=1.1%, 2=2.2%, 4=4.5%, 8=9.0%, 16=18.0%, 32=36.0%, >=64=29.2% 00:28:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.810 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.810 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.810 job3: (groupid=0, jobs=1): err= 0: pid=1796757: Thu Nov 28 11:00:43 2024 00:28:06.810 read: IOPS=17, BW=17.5MiB/s (18.4MB/s)(213MiB/12157msec) 00:28:06.810 slat (usec): min=571, max=2105.0k, avg=47098.19, stdev=257609.73 00:28:06.810 clat (msec): min=1450, max=9263, avg=3830.83, stdev=2572.25 00:28:06.810 lat (msec): min=1460, max=9271, avg=3877.93, stdev=2591.79 00:28:06.810 clat percentiles (msec): 00:28:06.810 | 1.00th=[ 1469], 5.00th=[ 1502], 10.00th=[ 1536], 20.00th=[ 2232], 00:28:06.810 | 30.00th=[ 2433], 40.00th=[ 2668], 50.00th=[ 2903], 60.00th=[ 3138], 00:28:06.810 | 70.00th=[ 3406], 80.00th=[ 5873], 90.00th=[ 9194], 95.00th=[ 9194], 00:28:06.810 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9329], 99.95th=[ 9329], 00:28:06.810 | 99.99th=[ 9329] 00:28:06.810 bw ( KiB/s): min= 1841, max=90112, per=1.97%, avg=58640.33, stdev=49285.50, samples=3 00:28:06.810 iops : min= 1, max= 88, avg=57.00, stdev=48.59, samples=3 00:28:06.810 lat (msec) : 2000=17.84%, >=2000=82.16% 00:28:06.810 cpu : usr=0.02%, sys=1.11%, ctx=415, majf=0, minf=32769 00:28:06.810 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.8%, 16=7.5%, 32=15.0%, >=64=70.4% 00:28:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.810 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:28:06.810 issued rwts: total=213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.810 job3: (groupid=0, jobs=1): err= 0: pid=1796758: Thu Nov 28 11:00:43 2024 00:28:06.810 read: IOPS=31, BW=31.4MiB/s (32.9MB/s)(379MiB/12070msec) 00:28:06.810 slat (usec): min=69, max=2080.4k, avg=26403.06, stdev=192651.85 00:28:06.810 clat (msec): min=673, max=8506, avg=1638.81, stdev=1384.71 00:28:06.810 lat (msec): min=680, max=8509, avg=1665.22, stdev=1428.45 00:28:06.810 clat percentiles (msec): 00:28:06.810 | 1.00th=[ 676], 5.00th=[ 684], 10.00th=[ 684], 20.00th=[ 693], 00:28:06.810 | 30.00th=[ 718], 40.00th=[ 785], 50.00th=[ 835], 60.00th=[ 2056], 00:28:06.810 | 70.00th=[ 2366], 80.00th=[ 2534], 90.00th=[ 2769], 95.00th=[ 3004], 00:28:06.810 | 99.00th=[ 8356], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490], 00:28:06.810 | 99.99th=[ 8490] 00:28:06.810 bw ( KiB/s): min= 4096, max=192512, per=4.33%, avg=129024.00, stdev=84722.21, samples=4 00:28:06.810 iops : min= 4, max= 188, avg=126.00, stdev=82.74, samples=4 00:28:06.810 lat (msec) : 750=34.56%, 1000=25.33%, >=2000=40.11% 00:28:06.810 cpu : usr=0.02%, sys=1.18%, ctx=344, majf=0, minf=32769 00:28:06.810 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 
8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4% 00:28:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.810 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:06.810 issued rwts: total=379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.810 job3: (groupid=0, jobs=1): err= 0: pid=1796759: Thu Nov 28 11:00:43 2024 00:28:06.810 read: IOPS=32, BW=32.0MiB/s (33.6MB/s)(387MiB/12092msec) 00:28:06.810 slat (usec): min=180, max=2110.7k, avg=31045.53, stdev=210647.52 00:28:06.810 clat (msec): min=74, max=9254, avg=3818.53, stdev=2864.11 00:28:06.810 lat (msec): min=654, max=9266, avg=3849.57, stdev=2863.94 00:28:06.810 clat percentiles (msec): 00:28:06.810 | 1.00th=[ 651], 5.00th=[ 659], 10.00th=[ 667], 20.00th=[ 701], 00:28:06.810 | 30.00th=[ 1586], 40.00th=[ 2500], 50.00th=[ 2903], 60.00th=[ 3306], 00:28:06.810 | 70.00th=[ 7215], 80.00th=[ 7684], 90.00th=[ 7886], 95.00th=[ 8020], 00:28:06.810 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:28:06.810 | 99.99th=[ 9194] 00:28:06.810 bw ( KiB/s): min= 6144, max=122634, per=1.98%, avg=58905.44, stdev=41309.12, samples=9 00:28:06.810 iops : min= 6, max= 119, avg=57.33, stdev=40.31, samples=9 00:28:06.810 lat (msec) : 100=0.26%, 750=23.26%, 1000=0.52%, 2000=8.79%, >=2000=67.18% 00:28:06.810 cpu : usr=0.03%, sys=0.88%, ctx=702, majf=0, minf=32769 00:28:06.810 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.3%, >=64=83.7% 00:28:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.810 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:06.810 issued rwts: total=387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.810 job3: (groupid=0, jobs=1): err= 0: pid=1796760: Thu Nov 28 11:00:43 2024 00:28:06.810 read: IOPS=4, BW=4209KiB/s (4310kB/s)(50.0MiB/12164msec) 00:28:06.810 slat (usec): min=827, max=4256.1k, avg=200563.28, stdev=731288.14 00:28:06.810 clat (msec): min=2135, max=12159, avg=10922.53, stdev=2831.22 00:28:06.810 lat (msec): min=2164, max=12163, avg=11123.09, stdev=2535.81 00:28:06.810 clat percentiles (msec): 00:28:06.810 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[10805], 00:28:06.810 | 30.00th=[12013], 40.00th=[12013], 50.00th=[12147], 60.00th=[12147], 00:28:06.810 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:06.810 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.810 | 99.99th=[12147] 00:28:06.810 lat (msec) : >=2000=100.00% 00:28:06.810 cpu : usr=0.00%, sys=0.54%, ctx=97, majf=0, minf=12801 00:28:06.810 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:28:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.810 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.810 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.810 job3: (groupid=0, jobs=1): err= 0: pid=1796761: Thu Nov 28 11:00:43 2024 00:28:06.810 read: IOPS=3, BW=3382KiB/s (3463kB/s)(40.0MiB/12111msec) 00:28:06.810 slat (usec): min=919, max=2103.3k, avg=300932.34, stdev=697097.03 00:28:06.810 clat (msec): min=72, max=12108, avg=8777.92, stdev=3795.50 00:28:06.810 lat (msec): min=2136, max=12110, avg=9078.85, stdev=3557.34 00:28:06.810 clat 
percentiles (msec): 00:28:06.810 | 1.00th=[ 73], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 4329], 00:28:06.810 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[11879], 00:28:06.810 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:06.810 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.810 | 99.99th=[12147] 00:28:06.810 lat (msec) : 100=2.50%, >=2000=97.50% 00:28:06.810 cpu : usr=0.00%, sys=0.30%, ctx=91, majf=0, minf=10241 00:28:06.810 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:28:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.810 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.810 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.810 job3: (groupid=0, jobs=1): err= 0: pid=1796762: Thu Nov 28 11:00:43 2024 00:28:06.810 read: IOPS=5, BW=5729KiB/s (5867kB/s)(68.0MiB/12154msec) 00:28:06.810 slat (usec): min=656, max=4261.3k, avg=147368.89, stdev=630201.82 00:28:06.810 clat (msec): min=2131, max=12152, avg=11051.84, stdev=2601.53 00:28:06.810 lat (msec): min=2161, max=12153, avg=11199.21, stdev=2361.46 00:28:06.810 clat percentiles (msec): 00:28:06.810 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[11879], 00:28:06.810 | 30.00th=[12013], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:28:06.810 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:06.810 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.810 | 99.99th=[12147] 00:28:06.810 lat (msec) : >=2000=100.00% 00:28:06.810 cpu : usr=0.01%, sys=0.63%, ctx=103, majf=0, minf=17409 00:28:06.810 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:28:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.810 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.810 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.810 job3: (groupid=0, jobs=1): err= 0: pid=1796763: Thu Nov 28 11:00:43 2024 00:28:06.810 read: IOPS=2, BW=2122KiB/s (2173kB/s)(25.0MiB/12066msec) 00:28:06.810 slat (usec): min=943, max=2101.9k, avg=479700.48, stdev=852368.85 00:28:06.810 clat (msec): min=72, max=11955, avg=7508.19, stdev=3279.67 00:28:06.810 lat (msec): min=2136, max=12065, avg=7987.89, stdev=3013.06 00:28:06.810 clat percentiles (msec): 00:28:06.810 | 1.00th=[ 73], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4279], 00:28:06.810 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[ 8557], 00:28:06.810 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:28:06.810 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.810 | 99.99th=[12013] 00:28:06.810 lat (msec) : 100=4.00%, >=2000=96.00% 00:28:06.810 cpu : usr=0.00%, sys=0.22%, ctx=91, majf=0, minf=6401 00:28:06.810 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:28:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.811 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:06.811 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.811 job3: (groupid=0, jobs=1): err= 0: 
pid=1796764: Thu Nov 28 11:00:43 2024 00:28:06.811 read: IOPS=5, BW=5279KiB/s (5406kB/s)(52.0MiB/10086msec) 00:28:06.811 slat (usec): min=854, max=2063.7k, avg=192352.15, stdev=562587.75 00:28:06.811 clat (msec): min=82, max=10084, avg=5738.31, stdev=4116.88 00:28:06.811 lat (msec): min=94, max=10085, avg=5930.66, stdev=4080.98 00:28:06.811 clat percentiles (msec): 00:28:06.811 | 1.00th=[ 84], 5.00th=[ 96], 10.00th=[ 131], 20.00th=[ 178], 00:28:06.811 | 30.00th=[ 2265], 40.00th=[ 4463], 50.00th=[ 6611], 60.00th=[ 8792], 00:28:06.811 | 70.00th=[10000], 80.00th=[10000], 90.00th=[10000], 95.00th=[10134], 00:28:06.811 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:28:06.811 | 99.99th=[10134] 00:28:06.811 lat (msec) : 100=5.77%, 250=21.15%, >=2000=73.08% 00:28:06.811 cpu : usr=0.00%, sys=0.51%, ctx=90, majf=0, minf=13313 00:28:06.811 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:28:06.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.811 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.811 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.811 job4: (groupid=0, jobs=1): err= 0: pid=1796765: Thu Nov 28 11:00:43 2024 00:28:06.811 read: IOPS=16, BW=16.6MiB/s (17.4MB/s)(199MiB/12018msec) 00:28:06.811 slat (usec): min=403, max=2227.3k, avg=50348.36, stdev=295448.15 00:28:06.811 clat (msec): min=268, max=8360, avg=5414.60, stdev=3625.44 00:28:06.811 lat (msec): min=270, max=8362, avg=5464.95, stdev=3605.45 00:28:06.811 clat percentiles (msec): 00:28:06.811 | 1.00th=[ 268], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 275], 00:28:06.811 | 30.00th=[ 1989], 40.00th=[ 6409], 50.00th=[ 8154], 60.00th=[ 8221], 00:28:06.811 | 70.00th=[ 8221], 80.00th=[ 8288], 90.00th=[ 8288], 95.00th=[ 8356], 00:28:06.811 | 99.00th=[ 8356], 99.50th=[ 8356], 99.90th=[ 8356], 99.95th=[ 8356], 00:28:06.811 | 99.99th=[ 8356] 00:28:06.811 bw ( KiB/s): min= 1484, max=131072, per=1.23%, avg=36719.75, stdev=62964.04, samples=4 00:28:06.811 iops : min= 1, max= 128, avg=35.50, stdev=61.73, samples=4 00:28:06.811 lat (msec) : 500=29.65%, 2000=1.51%, >=2000=68.84% 00:28:06.811 cpu : usr=0.00%, sys=0.61%, ctx=391, majf=0, minf=32769 00:28:06.811 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=16.1%, >=64=68.3% 00:28:06.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.811 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:28:06.811 issued rwts: total=199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.811 job4: (groupid=0, jobs=1): err= 0: pid=1796766: Thu Nov 28 11:00:43 2024 00:28:06.811 read: IOPS=30, BW=30.3MiB/s (31.8MB/s)(367MiB/12117msec) 00:28:06.811 slat (usec): min=44, max=2061.1k, avg=27571.58, stdev=212615.03 00:28:06.811 clat (msec): min=456, max=10974, avg=4070.80, stdev=4231.86 00:28:06.811 lat (msec): min=458, max=10974, avg=4098.37, stdev=4243.82 00:28:06.811 clat percentiles (msec): 00:28:06.811 | 1.00th=[ 460], 5.00th=[ 481], 10.00th=[ 498], 20.00th=[ 542], 00:28:06.811 | 30.00th=[ 575], 40.00th=[ 609], 50.00th=[ 726], 60.00th=[ 4245], 00:28:06.811 | 70.00th=[ 6812], 80.00th=[10671], 90.00th=[10805], 95.00th=[10939], 00:28:06.811 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:28:06.811 | 99.99th=[10939] 00:28:06.811 bw ( KiB/s): min= 1812, 
max=239616, per=2.36%, avg=70174.00, stdev=87930.10, samples=7 00:28:06.811 iops : min= 1, max= 234, avg=68.29, stdev=86.04, samples=7 00:28:06.811 lat (msec) : 500=10.35%, 750=40.60%, 2000=0.27%, >=2000=48.77% 00:28:06.811 cpu : usr=0.03%, sys=1.31%, ctx=325, majf=0, minf=32769 00:28:06.811 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.7%, >=64=82.8% 00:28:06.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.811 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:06.811 issued rwts: total=367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.811 job4: (groupid=0, jobs=1): err= 0: pid=1796767: Thu Nov 28 11:00:43 2024 00:28:06.811 read: IOPS=3, BW=3984KiB/s (4080kB/s)(47.0MiB/12079msec) 00:28:06.811 slat (usec): min=944, max=2077.3k, avg=213104.40, stdev=599164.41 00:28:06.811 clat (msec): min=2062, max=12036, avg=9369.16, stdev=3169.96 00:28:06.811 lat (msec): min=2079, max=12078, avg=9582.27, stdev=3000.20 00:28:06.811 clat percentiles (msec): 00:28:06.811 | 1.00th=[ 2056], 5.00th=[ 2106], 10.00th=[ 4245], 20.00th=[ 6409], 00:28:06.811 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671], 00:28:06.811 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:28:06.811 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.811 | 99.99th=[12013] 00:28:06.811 lat (msec) : >=2000=100.00% 00:28:06.811 cpu : usr=0.01%, sys=0.35%, ctx=81, majf=0, minf=12033 00:28:06.811 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:28:06.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.811 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.811 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.811 job4: (groupid=0, jobs=1): err= 0: pid=1796768: Thu Nov 28 11:00:43 2024 00:28:06.811 read: IOPS=2, BW=2714KiB/s (2780kB/s)(32.0MiB/12072msec) 00:28:06.811 slat (msec): min=3, max=2078, avg=374.67, stdev=753.13 00:28:06.811 clat (msec): min=82, max=12061, avg=6139.88, stdev=3929.57 00:28:06.811 lat (msec): min=2080, max=12071, avg=6514.55, stdev=3904.87 00:28:06.811 clat percentiles (msec): 00:28:06.811 | 1.00th=[ 83], 5.00th=[ 2089], 10.00th=[ 2089], 20.00th=[ 2123], 00:28:06.811 | 30.00th=[ 2140], 40.00th=[ 4178], 50.00th=[ 4329], 60.00th=[ 8557], 00:28:06.811 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[11879], 95.00th=[12013], 00:28:06.811 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.811 | 99.99th=[12013] 00:28:06.811 lat (msec) : 100=3.12%, >=2000=96.88% 00:28:06.811 cpu : usr=0.00%, sys=0.20%, ctx=93, majf=0, minf=8193 00:28:06.811 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:28:06.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.811 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:06.811 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.811 job4: (groupid=0, jobs=1): err= 0: pid=1796769: Thu Nov 28 11:00:43 2024 00:28:06.811 read: IOPS=4, BW=5001KiB/s (5121kB/s)(59.0MiB/12080msec) 00:28:06.811 slat (usec): min=1167, max=2089.7k, avg=169780.14, stdev=531809.64 00:28:06.811 clat (msec): min=2062, max=12076, 
avg=9690.20, stdev=3062.98 00:28:06.811 lat (msec): min=4066, max=12079, avg=9859.98, stdev=2906.49 00:28:06.811 clat percentiles (msec): 00:28:06.811 | 1.00th=[ 2056], 5.00th=[ 4077], 10.00th=[ 4178], 20.00th=[ 6342], 00:28:06.811 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[11879], 60.00th=[12013], 00:28:06.811 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:28:06.811 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.811 | 99.99th=[12013] 00:28:06.811 lat (msec) : >=2000=100.00% 00:28:06.811 cpu : usr=0.00%, sys=0.55%, ctx=133, majf=0, minf=15105 00:28:06.811 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:28:06.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.811 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.811 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.811 job4: (groupid=0, jobs=1): err= 0: pid=1796770: Thu Nov 28 11:00:43 2024 00:28:06.811 read: IOPS=151, BW=151MiB/s (159MB/s)(1528MiB/10099msec) 00:28:06.811 slat (usec): min=57, max=2064.2k, avg=6539.49, stdev=73837.89 00:28:06.811 clat (msec): min=98, max=4769, avg=813.34, stdev=1068.51 00:28:06.811 lat (msec): min=103, max=4774, avg=819.88, stdev=1072.24 00:28:06.811 clat percentiles (msec): 00:28:06.811 | 1.00th=[ 426], 5.00th=[ 430], 10.00th=[ 430], 20.00th=[ 435], 00:28:06.811 | 30.00th=[ 439], 40.00th=[ 443], 50.00th=[ 456], 60.00th=[ 472], 00:28:06.811 | 70.00th=[ 523], 80.00th=[ 535], 90.00th=[ 2265], 95.00th=[ 4530], 00:28:06.811 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4799], 99.95th=[ 4799], 00:28:06.811 | 99.99th=[ 4799] 00:28:06.811 bw ( KiB/s): min=16384, max=307200, per=7.41%, avg=220711.38, stdev=102642.58, samples=13 00:28:06.811 iops : min= 16, max= 300, avg=215.54, stdev=100.24, samples=13 00:28:06.811 lat (msec) : 100=0.07%, 250=0.46%, 500=64.66%, 750=24.35%, 2000=0.33% 00:28:06.811 lat (msec) : >=2000=10.14% 00:28:06.811 cpu : usr=0.04%, sys=2.57%, ctx=1310, majf=0, minf=32769 00:28:06.811 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:28:06.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.811 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.811 issued rwts: total=1528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.811 job4: (groupid=0, jobs=1): err= 0: pid=1796771: Thu Nov 28 11:00:43 2024 00:28:06.811 read: IOPS=5, BW=5744KiB/s (5882kB/s)(68.0MiB/12122msec) 00:28:06.811 slat (usec): min=767, max=2101.9k, avg=147146.43, stdev=505601.02 00:28:06.811 clat (msec): min=2115, max=12119, avg=10332.13, stdev=2956.77 00:28:06.811 lat (msec): min=2127, max=12121, avg=10479.28, stdev=2785.77 00:28:06.811 clat percentiles (msec): 00:28:06.811 | 1.00th=[ 2123], 5.00th=[ 2140], 10.00th=[ 4279], 20.00th=[ 8557], 00:28:06.811 | 30.00th=[10671], 40.00th=[10805], 50.00th=[12013], 60.00th=[12013], 00:28:06.811 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:06.811 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.811 | 99.99th=[12147] 00:28:06.811 lat (msec) : >=2000=100.00% 00:28:06.811 cpu : usr=0.01%, sys=0.60%, ctx=120, majf=0, minf=17409 00:28:06.811 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:28:06.811 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.811 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.812 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.812 job4: (groupid=0, jobs=1): err= 0: pid=1796772: Thu Nov 28 11:00:43 2024 00:28:06.812 read: IOPS=202, BW=203MiB/s (213MB/s)(2447MiB/12056msec) 00:28:06.812 slat (usec): min=40, max=2068.3k, avg=4881.99, stdev=78509.71 00:28:06.812 clat (msec): min=102, max=3744, avg=446.65, stdev=861.11 00:28:06.812 lat (msec): min=134, max=3751, avg=451.54, stdev=866.57 00:28:06.812 clat percentiles (msec): 00:28:06.812 | 1.00th=[ 136], 5.00th=[ 138], 10.00th=[ 138], 20.00th=[ 140], 00:28:06.812 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 142], 60.00th=[ 144], 00:28:06.812 | 70.00th=[ 144], 80.00th=[ 163], 90.00th=[ 2299], 95.00th=[ 2534], 00:28:06.812 | 99.00th=[ 3473], 99.50th=[ 3473], 99.90th=[ 3675], 99.95th=[ 3708], 00:28:06.812 | 99.99th=[ 3742] 00:28:06.812 bw ( KiB/s): min=12374, max=946331, per=17.69%, avg=527045.44, stdev=394543.01, samples=9 00:28:06.812 iops : min= 12, max= 924, avg=514.67, stdev=385.29, samples=9 00:28:06.812 lat (msec) : 250=84.43%, 500=4.58%, 2000=0.33%, >=2000=10.67% 00:28:06.812 cpu : usr=0.11%, sys=1.57%, ctx=2513, majf=0, minf=32769 00:28:06.812 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:28:06.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.812 issued rwts: total=2447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.812 job4: (groupid=0, jobs=1): err= 0: pid=1796773: Thu Nov 28 11:00:43 2024 00:28:06.812 read: IOPS=9, BW=9890KiB/s (10.1MB/s)(116MiB/12010msec) 00:28:06.812 slat (usec): min=399, max=2044.8k, avg=86320.74, stdev=380496.55 00:28:06.812 clat (msec): min=1995, max=12007, avg=6691.21, stdev=3486.88 00:28:06.812 lat (msec): min=2125, max=12009, avg=6777.53, stdev=3493.57 00:28:06.812 clat percentiles (msec): 00:28:06.812 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 4111], 00:28:06.812 | 30.00th=[ 4144], 40.00th=[ 4245], 50.00th=[ 6342], 60.00th=[ 6477], 00:28:06.812 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[12013], 95.00th=[12013], 00:28:06.812 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:06.812 | 99.99th=[12013] 00:28:06.812 lat (msec) : 2000=0.86%, >=2000=99.14% 00:28:06.812 cpu : usr=0.00%, sys=0.87%, ctx=104, majf=0, minf=29697 00:28:06.812 IO depths : 1=0.9%, 2=1.7%, 4=3.4%, 8=6.9%, 16=13.8%, 32=27.6%, >=64=45.7% 00:28:06.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.812 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:06.812 issued rwts: total=116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.812 job4: (groupid=0, jobs=1): err= 0: pid=1796774: Thu Nov 28 11:00:43 2024 00:28:06.812 read: IOPS=3, BW=3812KiB/s (3903kB/s)(45.0MiB/12089msec) 00:28:06.812 slat (usec): min=639, max=2151.8k, avg=222514.39, stdev=615100.87 00:28:06.812 clat (msec): min=2074, max=12086, avg=8519.43, stdev=3864.91 00:28:06.812 lat (msec): min=2136, max=12088, avg=8741.95, stdev=3772.59 00:28:06.812 clat percentiles (msec): 00:28:06.812 | 1.00th=[ 2072], 5.00th=[ 4144], 
10.00th=[ 4144], 20.00th=[ 4144], 00:28:06.812 | 30.00th=[ 4212], 40.00th=[ 6342], 50.00th=[11879], 60.00th=[11879], 00:28:06.812 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12147], 00:28:06.812 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:06.812 | 99.99th=[12147] 00:28:06.812 lat (msec) : >=2000=100.00% 00:28:06.812 cpu : usr=0.00%, sys=0.41%, ctx=97, majf=0, minf=11521 00:28:06.812 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:28:06.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.812 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.812 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.812 job4: (groupid=0, jobs=1): err= 0: pid=1796775: Thu Nov 28 11:00:43 2024 00:28:06.812 read: IOPS=149, BW=149MiB/s (157MB/s)(1507MiB/10094msec) 00:28:06.812 slat (usec): min=45, max=2034.3k, avg=6653.26, stdev=52784.57 00:28:06.812 clat (msec): min=51, max=3031, avg=813.67, stdev=632.11 00:28:06.812 lat (msec): min=153, max=3051, avg=820.32, stdev=634.56 00:28:06.812 clat percentiles (msec): 00:28:06.812 | 1.00th=[ 203], 5.00th=[ 542], 10.00th=[ 584], 20.00th=[ 584], 00:28:06.812 | 30.00th=[ 592], 40.00th=[ 600], 50.00th=[ 609], 60.00th=[ 659], 00:28:06.812 | 70.00th=[ 667], 80.00th=[ 701], 90.00th=[ 877], 95.00th=[ 2869], 00:28:06.812 | 99.00th=[ 3004], 99.50th=[ 3004], 99.90th=[ 3004], 99.95th=[ 3037], 00:28:06.812 | 99.99th=[ 3037] 00:28:06.812 bw ( KiB/s): min=63488, max=223232, per=5.92%, avg=176512.00, stdev=54361.29, samples=16 00:28:06.812 iops : min= 62, max= 218, avg=172.38, stdev=53.09, samples=16 00:28:06.812 lat (msec) : 100=0.07%, 250=1.13%, 500=3.12%, 750=79.56%, 1000=7.70% 00:28:06.812 lat (msec) : >=2000=8.43% 00:28:06.812 cpu : usr=0.12%, sys=3.26%, ctx=1272, majf=0, minf=32769 00:28:06.812 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:28:06.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.812 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.812 issued rwts: total=1507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.812 job4: (groupid=0, jobs=1): err= 0: pid=1796776: Thu Nov 28 11:00:43 2024 00:28:06.812 read: IOPS=16, BW=16.9MiB/s (17.8MB/s)(207MiB/12220msec) 00:28:06.812 slat (usec): min=76, max=4071.5k, avg=58536.05, stdev=377085.36 00:28:06.812 clat (msec): min=101, max=11788, avg=7312.31, stdev=5013.12 00:28:06.812 lat (msec): min=593, max=11801, avg=7370.84, stdev=4994.86 00:28:06.812 clat percentiles (msec): 00:28:06.812 | 1.00th=[ 592], 5.00th=[ 642], 10.00th=[ 676], 20.00th=[ 718], 00:28:06.812 | 30.00th=[ 751], 40.00th=[ 7483], 50.00th=[11342], 60.00th=[11476], 00:28:06.812 | 70.00th=[11476], 80.00th=[11610], 90.00th=[11745], 95.00th=[11745], 00:28:06.812 | 99.00th=[11745], 99.50th=[11745], 99.90th=[11745], 99.95th=[11745], 00:28:06.812 | 99.99th=[11745] 00:28:06.812 bw ( KiB/s): min= 2048, max=133120, per=0.90%, avg=26963.83, stdev=52076.67, samples=6 00:28:06.812 iops : min= 2, max= 130, avg=26.17, stdev=50.94, samples=6 00:28:06.812 lat (msec) : 250=0.48%, 750=29.47%, 1000=1.93%, 2000=1.45%, >=2000=66.67% 00:28:06.812 cpu : usr=0.00%, sys=0.85%, ctx=247, majf=0, minf=32331 00:28:06.812 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.9%, 16=7.7%, 32=15.5%, >=64=69.6% 00:28:06.812 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.812 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:28:06.812 issued rwts: total=207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.812 job4: (groupid=0, jobs=1): err= 0: pid=1796777: Thu Nov 28 11:00:43 2024 00:28:06.812 read: IOPS=4, BW=4865KiB/s (4982kB/s)(57.0MiB/11998msec) 00:28:06.812 slat (usec): min=899, max=2076.2k, avg=209613.27, stdev=586419.91 00:28:06.812 clat (msec): min=49, max=11936, avg=7356.23, stdev=4164.48 00:28:06.812 lat (msec): min=2006, max=11997, avg=7565.84, stdev=4090.17 00:28:06.812 clat percentiles (msec): 00:28:06.812 | 1.00th=[ 50], 5.00th=[ 2140], 10.00th=[ 2140], 20.00th=[ 2140], 00:28:06.812 | 30.00th=[ 2165], 40.00th=[ 6342], 50.00th=[10537], 60.00th=[10805], 00:28:06.812 | 70.00th=[10805], 80.00th=[10805], 90.00th=[11879], 95.00th=[11879], 00:28:06.812 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:28:06.812 | 99.99th=[11879] 00:28:06.812 lat (msec) : 50=1.75%, >=2000=98.25% 00:28:06.812 cpu : usr=0.00%, sys=0.42%, ctx=78, majf=0, minf=14593 00:28:06.812 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:28:06.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.812 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.812 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.812 job5: (groupid=0, jobs=1): err= 0: pid=1796778: Thu Nov 28 11:00:43 2024 00:28:06.812 read: IOPS=156, BW=156MiB/s (164MB/s)(1567MiB/10028msec) 00:28:06.812 slat (usec): min=59, max=2087.9k, avg=6378.88, stdev=60369.86 00:28:06.812 clat (msec): min=24, max=2713, avg=707.30, stdev=612.93 00:28:06.812 lat (msec): min=27, max=2714, avg=713.68, stdev=615.55 00:28:06.812 clat percentiles (msec): 00:28:06.812 | 1.00th=[ 159], 5.00th=[ 279], 10.00th=[ 321], 20.00th=[ 409], 00:28:06.812 | 30.00th=[ 414], 40.00th=[ 485], 50.00th=[ 550], 60.00th=[ 558], 00:28:06.812 | 70.00th=[ 667], 80.00th=[ 751], 90.00th=[ 885], 95.00th=[ 2635], 00:28:06.812 | 99.00th=[ 2702], 99.50th=[ 2702], 99.90th=[ 2702], 99.95th=[ 2702], 00:28:06.812 | 99.99th=[ 2702] 00:28:06.812 bw ( KiB/s): min=114688, max=421888, per=7.61%, avg=226759.15, stdev=83296.83, samples=13 00:28:06.812 iops : min= 112, max= 412, avg=221.31, stdev=81.45, samples=13 00:28:06.812 lat (msec) : 50=0.26%, 250=1.60%, 500=40.52%, 750=37.65%, 1000=10.66% 00:28:06.812 lat (msec) : 2000=1.21%, >=2000=8.10% 00:28:06.812 cpu : usr=0.04%, sys=2.87%, ctx=1319, majf=0, minf=32769 00:28:06.812 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:28:06.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.812 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.812 issued rwts: total=1567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.812 job5: (groupid=0, jobs=1): err= 0: pid=1796779: Thu Nov 28 11:00:43 2024 00:28:06.812 read: IOPS=152, BW=152MiB/s (160MB/s)(1838MiB/12079msec) 00:28:06.812 slat (usec): min=39, max=2064.0k, avg=5448.33, stdev=87516.68 00:28:06.812 clat (msec): min=119, max=8139, avg=447.89, stdev=1186.37 00:28:06.812 lat (msec): min=120, max=8141, avg=453.34, stdev=1200.04 00:28:06.812 clat percentiles (msec): 
00:28:06.812 | 1.00th=[ 121], 5.00th=[ 122], 10.00th=[ 122], 20.00th=[ 123], 00:28:06.812 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 133], 00:28:06.812 | 70.00th=[ 136], 80.00th=[ 136], 90.00th=[ 313], 95.00th=[ 2232], 00:28:06.812 | 99.00th=[ 8087], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:28:06.812 | 99.99th=[ 8154] 00:28:06.812 bw ( KiB/s): min= 1910, max=1052672, per=23.52%, avg=700798.00, stdev=432546.12, samples=5 00:28:06.812 iops : min= 1, max= 1028, avg=684.20, stdev=422.76, samples=5 00:28:06.812 lat (msec) : 250=88.68%, 500=1.85%, >=2000=9.47% 00:28:06.812 cpu : usr=0.02%, sys=1.86%, ctx=1867, majf=0, minf=32769 00:28:06.812 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:28:06.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.813 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.813 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.813 job5: (groupid=0, jobs=1): err= 0: pid=1796780: Thu Nov 28 11:00:43 2024 00:28:06.813 read: IOPS=14, BW=14.3MiB/s (15.0MB/s)(173MiB/12072msec) 00:28:06.813 slat (usec): min=61, max=2133.9k, avg=69117.25, stdev=342687.06 00:28:06.813 clat (msec): min=113, max=8120, avg=3353.97, stdev=1487.63 00:28:06.813 lat (msec): min=297, max=8132, avg=3423.09, stdev=1493.01 00:28:06.813 clat percentiles (msec): 00:28:06.813 | 1.00th=[ 296], 5.00th=[ 309], 10.00th=[ 485], 20.00th=[ 2005], 00:28:06.813 | 30.00th=[ 2265], 40.00th=[ 3910], 50.00th=[ 3943], 60.00th=[ 3977], 00:28:06.813 | 70.00th=[ 4010], 80.00th=[ 4077], 90.00th=[ 4111], 95.00th=[ 4665], 00:28:06.813 | 99.00th=[ 8087], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:28:06.813 | 99.99th=[ 8154] 00:28:06.813 bw ( KiB/s): min=34335, max=57229, per=1.54%, avg=45782.00, stdev=16188.50, samples=2 00:28:06.813 iops : min= 33, max= 55, avg=44.00, stdev=15.56, samples=2 00:28:06.813 lat (msec) : 250=0.58%, 500=9.83%, 2000=9.25%, >=2000=80.35% 00:28:06.813 cpu : usr=0.00%, sys=0.60%, ctx=380, majf=0, minf=32769 00:28:06.813 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.6%, 16=9.2%, 32=18.5%, >=64=63.6% 00:28:06.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.813 complete : 0=0.0%, 4=97.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1% 00:28:06.813 issued rwts: total=173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.813 job5: (groupid=0, jobs=1): err= 0: pid=1796781: Thu Nov 28 11:00:43 2024 00:28:06.813 read: IOPS=26, BW=26.1MiB/s (27.3MB/s)(315MiB/12090msec) 00:28:06.813 slat (usec): min=39, max=2035.9k, avg=38025.18, stdev=191622.21 00:28:06.813 clat (msec): min=109, max=5461, avg=3946.60, stdev=1115.74 00:28:06.813 lat (msec): min=1163, max=5469, avg=3984.63, stdev=1080.33 00:28:06.813 clat percentiles (msec): 00:28:06.813 | 1.00th=[ 1955], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 3205], 00:28:06.813 | 30.00th=[ 3440], 40.00th=[ 3608], 50.00th=[ 3977], 60.00th=[ 4279], 00:28:06.813 | 70.00th=[ 4799], 80.00th=[ 5201], 90.00th=[ 5269], 95.00th=[ 5336], 00:28:06.813 | 99.00th=[ 5403], 99.50th=[ 5470], 99.90th=[ 5470], 99.95th=[ 5470], 00:28:06.813 | 99.99th=[ 5470] 00:28:06.813 bw ( KiB/s): min= 6023, max=106496, per=1.61%, avg=47840.00, stdev=37260.25, samples=8 00:28:06.813 iops : min= 5, max= 104, avg=46.50, stdev=36.47, samples=8 00:28:06.813 lat (msec) : 250=0.32%, 2000=1.59%, 
>=2000=98.10% 00:28:06.813 cpu : usr=0.03%, sys=0.75%, ctx=1046, majf=0, minf=32769 00:28:06.813 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.1%, 32=10.2%, >=64=80.0% 00:28:06.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.813 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:28:06.813 issued rwts: total=315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.813 job5: (groupid=0, jobs=1): err= 0: pid=1796782: Thu Nov 28 11:00:43 2024 00:28:06.813 read: IOPS=13, BW=13.9MiB/s (14.6MB/s)(168MiB/12076msec) 00:28:06.813 slat (usec): min=524, max=2133.9k, avg=71222.67, stdev=327389.52 00:28:06.813 clat (msec): min=109, max=9341, avg=3150.92, stdev=1425.16 00:28:06.813 lat (msec): min=1784, max=9370, avg=3222.14, stdev=1483.75 00:28:06.813 clat percentiles (msec): 00:28:06.813 | 1.00th=[ 1754], 5.00th=[ 1804], 10.00th=[ 1838], 20.00th=[ 2165], 00:28:06.813 | 30.00th=[ 2433], 40.00th=[ 2702], 50.00th=[ 2937], 60.00th=[ 3205], 00:28:06.813 | 70.00th=[ 3473], 80.00th=[ 3742], 90.00th=[ 3977], 95.00th=[ 6007], 00:28:06.813 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:28:06.813 | 99.99th=[ 9329] 00:28:06.813 bw ( KiB/s): min=20439, max=59883, per=1.35%, avg=40161.00, stdev=27891.12, samples=2 00:28:06.813 iops : min= 19, max= 58, avg=38.50, stdev=27.58, samples=2 00:28:06.813 lat (msec) : 250=0.60%, 2000=17.26%, >=2000=82.14% 00:28:06.813 cpu : usr=0.01%, sys=0.79%, ctx=453, majf=0, minf=32769 00:28:06.813 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.5%, 32=19.0%, >=64=62.5% 00:28:06.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.813 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4% 00:28:06.813 issued rwts: total=168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.813 job5: (groupid=0, jobs=1): err= 0: pid=1796783: Thu Nov 28 11:00:43 2024 00:28:06.813 read: IOPS=103, BW=103MiB/s (108MB/s)(1034MiB/10026msec) 00:28:06.813 slat (usec): min=48, max=1993.1k, avg=9680.74, stdev=72093.29 00:28:06.813 clat (msec): min=12, max=2687, avg=1033.25, stdev=616.71 00:28:06.813 lat (msec): min=78, max=2694, avg=1042.93, stdev=618.30 00:28:06.813 clat percentiles (msec): 00:28:06.813 | 1.00th=[ 134], 5.00th=[ 542], 10.00th=[ 609], 20.00th=[ 667], 00:28:06.813 | 30.00th=[ 701], 40.00th=[ 735], 50.00th=[ 776], 60.00th=[ 793], 00:28:06.813 | 70.00th=[ 1083], 80.00th=[ 1250], 90.00th=[ 2400], 95.00th=[ 2534], 00:28:06.813 | 99.00th=[ 2635], 99.50th=[ 2668], 99.90th=[ 2668], 99.95th=[ 2702], 00:28:06.813 | 99.99th=[ 2702] 00:28:06.813 bw ( KiB/s): min=24576, max=245760, per=4.79%, avg=142700.54, stdev=62629.21, samples=13 00:28:06.813 iops : min= 24, max= 240, avg=139.31, stdev=61.12, samples=13 00:28:06.813 lat (msec) : 20=0.10%, 100=0.29%, 250=0.87%, 750=42.17%, 1000=24.95% 00:28:06.813 lat (msec) : 2000=17.70%, >=2000=13.93% 00:28:06.813 cpu : usr=0.05%, sys=1.34%, ctx=2221, majf=0, minf=32769 00:28:06.813 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:28:06.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.813 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.813 issued rwts: total=1034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.813 job5: (groupid=0, 
jobs=1): err= 0: pid=1796784: Thu Nov 28 11:00:43 2024 00:28:06.813 read: IOPS=43, BW=43.2MiB/s (45.3MB/s)(522MiB/12092msec) 00:28:06.813 slat (usec): min=471, max=2106.7k, avg=22971.10, stdev=137080.65 00:28:06.813 clat (msec): min=98, max=4905, avg=2358.79, stdev=1462.21 00:28:06.813 lat (msec): min=573, max=4912, avg=2381.76, stdev=1460.68 00:28:06.813 clat percentiles (msec): 00:28:06.813 | 1.00th=[ 584], 5.00th=[ 684], 10.00th=[ 768], 20.00th=[ 927], 00:28:06.813 | 30.00th=[ 961], 40.00th=[ 1670], 50.00th=[ 2333], 60.00th=[ 2601], 00:28:06.813 | 70.00th=[ 2903], 80.00th=[ 4463], 90.00th=[ 4597], 95.00th=[ 4732], 00:28:06.813 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933], 00:28:06.813 | 99.99th=[ 4933] 00:28:06.813 bw ( KiB/s): min= 4096, max=172032, per=2.71%, avg=80638.80, stdev=58470.03, samples=10 00:28:06.813 iops : min= 4, max= 168, avg=78.60, stdev=56.96, samples=10 00:28:06.813 lat (msec) : 100=0.19%, 750=9.00%, 1000=27.01%, 2000=9.39%, >=2000=54.41% 00:28:06.813 cpu : usr=0.00%, sys=1.11%, ctx=2201, majf=0, minf=32769 00:28:06.813 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=87.9% 00:28:06.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.813 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:06.813 issued rwts: total=522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.813 job5: (groupid=0, jobs=1): err= 0: pid=1796785: Thu Nov 28 11:00:43 2024 00:28:06.813 read: IOPS=43, BW=43.5MiB/s (45.6MB/s)(524MiB/12048msec) 00:28:06.813 slat (usec): min=1925, max=2007.3k, avg=22799.83, stdev=133723.04 00:28:06.813 clat (msec): min=98, max=4856, avg=2331.99, stdev=1441.54 00:28:06.813 lat (msec): min=570, max=4863, avg=2354.79, stdev=1440.39 00:28:06.813 clat percentiles (msec): 00:28:06.813 | 1.00th=[ 584], 5.00th=[ 667], 10.00th=[ 760], 20.00th=[ 927], 00:28:06.813 | 30.00th=[ 969], 40.00th=[ 1653], 50.00th=[ 2333], 60.00th=[ 2635], 00:28:06.813 | 70.00th=[ 2836], 80.00th=[ 4396], 90.00th=[ 4597], 95.00th=[ 4665], 00:28:06.813 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:28:06.813 | 99.99th=[ 4866] 00:28:06.813 bw ( KiB/s): min=10730, max=188416, per=2.71%, avg=80674.60, stdev=59656.23, samples=10 00:28:06.813 iops : min= 10, max= 184, avg=78.50, stdev=58.20, samples=10 00:28:06.813 lat (msec) : 100=0.19%, 750=9.35%, 1000=27.10%, 2000=9.16%, >=2000=54.20% 00:28:06.813 cpu : usr=0.02%, sys=1.10%, ctx=2215, majf=0, minf=32769 00:28:06.813 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=88.0% 00:28:06.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.813 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:06.813 issued rwts: total=524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.813 job5: (groupid=0, jobs=1): err= 0: pid=1796786: Thu Nov 28 11:00:43 2024 00:28:06.813 read: IOPS=66, BW=66.2MiB/s (69.4MB/s)(672MiB/10156msec) 00:28:06.813 slat (usec): min=55, max=2080.4k, avg=14903.43, stdev=144523.98 00:28:06.813 clat (msec): min=134, max=5912, avg=1548.90, stdev=1777.70 00:28:06.813 lat (msec): min=162, max=5914, avg=1563.80, stdev=1784.29 00:28:06.813 clat percentiles (msec): 00:28:06.813 | 1.00th=[ 409], 5.00th=[ 409], 10.00th=[ 414], 20.00th=[ 414], 00:28:06.813 | 30.00th=[ 418], 40.00th=[ 422], 50.00th=[ 550], 60.00th=[ 617], 
00:28:06.813 | 70.00th=[ 2400], 80.00th=[ 2567], 90.00th=[ 5873], 95.00th=[ 5873], 00:28:06.813 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:28:06.813 | 99.99th=[ 5940] 00:28:06.813 bw ( KiB/s): min= 8192, max=313344, per=6.23%, avg=185581.00, stdev=120339.15, samples=6 00:28:06.813 iops : min= 8, max= 306, avg=181.17, stdev=117.44, samples=6 00:28:06.813 lat (msec) : 250=0.74%, 500=45.98%, 750=20.09%, >=2000=33.18% 00:28:06.813 cpu : usr=0.11%, sys=2.03%, ctx=652, majf=0, minf=32769 00:28:06.813 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:28:06.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.814 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:06.814 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.814 job5: (groupid=0, jobs=1): err= 0: pid=1796787: Thu Nov 28 11:00:43 2024 00:28:06.814 read: IOPS=90, BW=91.0MiB/s (95.4MB/s)(919MiB/10103msec) 00:28:06.814 slat (usec): min=128, max=107971, avg=10900.09, stdev=15349.94 00:28:06.814 clat (msec): min=77, max=2352, avg=1298.90, stdev=340.12 00:28:06.814 lat (msec): min=119, max=2360, avg=1309.80, stdev=340.11 00:28:06.814 clat percentiles (msec): 00:28:06.814 | 1.00th=[ 305], 5.00th=[ 902], 10.00th=[ 1028], 20.00th=[ 1099], 00:28:06.814 | 30.00th=[ 1150], 40.00th=[ 1200], 50.00th=[ 1250], 60.00th=[ 1301], 00:28:06.814 | 70.00th=[ 1334], 80.00th=[ 1401], 90.00th=[ 1854], 95.00th=[ 2056], 00:28:06.814 | 99.00th=[ 2198], 99.50th=[ 2232], 99.90th=[ 2366], 99.95th=[ 2366], 00:28:06.814 | 99.99th=[ 2366] 00:28:06.814 bw ( KiB/s): min=49152, max=124928, per=3.02%, avg=89988.44, stdev=24400.26, samples=18 00:28:06.814 iops : min= 48, max= 122, avg=87.83, stdev=23.83, samples=18 00:28:06.814 lat (msec) : 100=0.11%, 250=0.65%, 500=1.41%, 750=1.74%, 1000=3.26% 00:28:06.814 lat (msec) : 2000=85.64%, >=2000=7.18% 00:28:06.814 cpu : usr=0.11%, sys=2.20%, ctx=2015, majf=0, minf=32769 00:28:06.814 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.1% 00:28:06.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.814 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.814 issued rwts: total=919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.814 job5: (groupid=0, jobs=1): err= 0: pid=1796788: Thu Nov 28 11:00:43 2024 00:28:06.814 read: IOPS=277, BW=277MiB/s (291MB/s)(2786MiB/10054msec) 00:28:06.814 slat (usec): min=36, max=2117.9k, avg=3585.66, stdev=59335.55 00:28:06.814 clat (msec): min=51, max=2528, avg=408.58, stdev=661.69 00:28:06.814 lat (msec): min=58, max=2530, avg=412.16, stdev=664.21 00:28:06.814 clat percentiles (msec): 00:28:06.814 | 1.00th=[ 136], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 142], 00:28:06.814 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 144], 60.00th=[ 148], 00:28:06.814 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 1418], 95.00th=[ 2433], 00:28:06.814 | 99.00th=[ 2500], 99.50th=[ 2500], 99.90th=[ 2534], 99.95th=[ 2534], 00:28:06.814 | 99.99th=[ 2534] 00:28:06.814 bw ( KiB/s): min=47104, max=925696, per=18.26%, avg=544155.10, stdev=298917.66, samples=10 00:28:06.814 iops : min= 46, max= 904, avg=531.30, stdev=291.85, samples=10 00:28:06.814 lat (msec) : 100=0.32%, 250=60.77%, 500=28.68%, 2000=1.11%, >=2000=9.12% 00:28:06.814 cpu : usr=0.10%, sys=3.19%, ctx=2483, majf=0, 
minf=32769 00:28:06.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:28:06.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.814 issued rwts: total=2786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.814 job5: (groupid=0, jobs=1): err= 0: pid=1796789: Thu Nov 28 11:00:43 2024 00:28:06.814 read: IOPS=156, BW=156MiB/s (164MB/s)(1569MiB/10040msec) 00:28:06.814 slat (usec): min=41, max=2036.5k, avg=6372.40, stdev=59169.95 00:28:06.814 clat (msec): min=27, max=2720, avg=715.80, stdev=620.33 00:28:06.814 lat (msec): min=130, max=2722, avg=722.18, stdev=622.87 00:28:06.814 clat percentiles (msec): 00:28:06.814 | 1.00th=[ 161], 5.00th=[ 279], 10.00th=[ 321], 20.00th=[ 409], 00:28:06.814 | 30.00th=[ 414], 40.00th=[ 485], 50.00th=[ 550], 60.00th=[ 575], 00:28:06.814 | 70.00th=[ 667], 80.00th=[ 760], 90.00th=[ 944], 95.00th=[ 2635], 00:28:06.814 | 99.00th=[ 2702], 99.50th=[ 2702], 99.90th=[ 2735], 99.95th=[ 2735], 00:28:06.814 | 99.99th=[ 2735] 00:28:06.814 bw ( KiB/s): min= 2048, max=430080, per=7.07%, avg=210743.00, stdev=101304.03, samples=14 00:28:06.814 iops : min= 2, max= 420, avg=205.71, stdev=98.95, samples=14 00:28:06.814 lat (msec) : 50=0.06%, 250=1.72%, 500=40.41%, 750=37.03%, 1000=10.83% 00:28:06.814 lat (msec) : 2000=1.85%, >=2000=8.09% 00:28:06.814 cpu : usr=0.16%, sys=3.05%, ctx=1310, majf=0, minf=32769 00:28:06.814 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:28:06.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.814 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.814 issued rwts: total=1569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.814 job5: (groupid=0, jobs=1): err= 0: pid=1796790: Thu Nov 28 11:00:43 2024 00:28:06.814 read: IOPS=104, BW=104MiB/s (109MB/s)(1053MiB/10101msec) 00:28:06.814 slat (usec): min=59, max=108022, avg=9511.27, stdev=14176.49 00:28:06.814 clat (msec): min=77, max=2174, avg=1126.83, stdev=354.24 00:28:06.814 lat (msec): min=117, max=2179, avg=1136.34, stdev=354.68 00:28:06.814 clat percentiles (msec): 00:28:06.814 | 1.00th=[ 296], 5.00th=[ 651], 10.00th=[ 676], 20.00th=[ 818], 00:28:06.814 | 30.00th=[ 919], 40.00th=[ 1083], 50.00th=[ 1150], 60.00th=[ 1200], 00:28:06.814 | 70.00th=[ 1234], 80.00th=[ 1351], 90.00th=[ 1569], 95.00th=[ 1821], 00:28:06.814 | 99.00th=[ 2056], 99.50th=[ 2072], 99.90th=[ 2123], 99.95th=[ 2165], 00:28:06.814 | 99.99th=[ 2165] 00:28:06.814 bw ( KiB/s): min=51200, max=198656, per=3.74%, avg=111435.29, stdev=40632.67, samples=17 00:28:06.814 iops : min= 50, max= 194, avg=108.82, stdev=39.68, samples=17 00:28:06.814 lat (msec) : 100=0.09%, 250=0.66%, 500=1.42%, 750=12.35%, 1000=22.70% 00:28:06.814 lat (msec) : 2000=60.87%, >=2000=1.90% 00:28:06.814 cpu : usr=0.13%, sys=2.25%, ctx=2015, majf=0, minf=32769 00:28:06.814 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:28:06.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.814 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.814 issued rwts: total=1053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.814 00:28:06.814 Run status group 0 (all 
jobs): 00:28:06.814 READ: bw=2910MiB/s (3051MB/s), 1608KiB/s-285MiB/s (1647kB/s-299MB/s), io=34.7GiB (37.3GB), run=10026-12220msec 00:28:06.814 00:28:06.814 Disk stats (read/write): 00:28:06.814 nvme0n1: ios=38175/0, merge=0/0, ticks=7614618/0, in_queue=7614618, util=98.31% 00:28:06.814 nvme1n1: ios=38721/0, merge=0/0, ticks=8883759/0, in_queue=8883759, util=98.46% 00:28:06.814 nvme2n1: ios=34517/0, merge=0/0, ticks=7603557/0, in_queue=7603557, util=98.78% 00:28:06.814 nvme3n1: ios=12572/0, merge=0/0, ticks=7275559/0, in_queue=7275559, util=98.85% 00:28:06.814 nvme4n1: ios=52907/0, merge=0/0, ticks=8709133/0, in_queue=8709133, util=99.16% 00:28:06.814 nvme5n1: ios=104854/0, merge=0/0, ticks=8491784/0, in_queue=8491784, util=99.15% 00:28:06.814 11:00:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:28:06.814 11:00:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:28:06.814 11:00:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:06.814 11:00:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:28:06.814 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:06.814 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:06.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:06.814 11:00:45 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:06.814 11:00:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:07.753 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:07.753 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:28:07.753 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:07.753 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:07.753 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:28:08.012 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:08.012 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:28:08.012 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:08.012 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:08.012 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.012 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:08.012 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.013 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:08.013 11:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:08.951 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 
00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:08.951 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:09.890 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:09.890 11:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:10.829 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:10.829 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:10.829 rmmod nvme_rdma 00:28:11.088 rmmod nvme_fabrics 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 1795107 ']' 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 1795107 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 1795107 ']' 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 1795107 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795107 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795107' 00:28:11.088 killing process with pid 1795107 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 1795107 00:28:11.088 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 1795107 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:13.627 00:28:13.627 real 0m38.261s 00:28:13.627 user 2m7.139s 00:28:13.627 sys 0m18.179s 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:13.627 ************************************ 00:28:13.627 END TEST nvmf_srq_overwhelm 00:28:13.627 ************************************ 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:13.627 ************************************ 00:28:13.627 START TEST nvmf_shutdown 00:28:13.627 ************************************ 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:28:13.627 * Looking for test storage... 
00:28:13.627 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:13.627 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:13.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.628 --rc genhtml_branch_coverage=1 00:28:13.628 --rc genhtml_function_coverage=1 00:28:13.628 --rc genhtml_legend=1 00:28:13.628 --rc geninfo_all_blocks=1 00:28:13.628 --rc geninfo_unexecuted_blocks=1 00:28:13.628 00:28:13.628 ' 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:13.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.628 --rc genhtml_branch_coverage=1 00:28:13.628 --rc genhtml_function_coverage=1 00:28:13.628 --rc genhtml_legend=1 00:28:13.628 --rc geninfo_all_blocks=1 00:28:13.628 --rc geninfo_unexecuted_blocks=1 00:28:13.628 00:28:13.628 ' 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:13.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.628 --rc genhtml_branch_coverage=1 00:28:13.628 --rc genhtml_function_coverage=1 00:28:13.628 --rc genhtml_legend=1 00:28:13.628 --rc geninfo_all_blocks=1 00:28:13.628 --rc geninfo_unexecuted_blocks=1 00:28:13.628 00:28:13.628 ' 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:13.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.628 --rc genhtml_branch_coverage=1 00:28:13.628 --rc genhtml_function_coverage=1 00:28:13.628 --rc genhtml_legend=1 00:28:13.628 --rc geninfo_all_blocks=1 00:28:13.628 --rc geninfo_unexecuted_blocks=1 00:28:13.628 00:28:13.628 ' 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.628 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:13.889 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:13.889 11:00:52 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:13.889 ************************************ 00:28:13.889 START TEST nvmf_shutdown_tc1 00:28:13.889 ************************************ 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.889 11:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.031 11:01:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.031 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:22.032 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:22.032 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:d9:00.0: mlx_0_0' 00:28:22.032 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:22.032 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
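
The modprobe sequence in the trace above is the whole of the rdma_device_init module-load step: the harness pulls in the InfiniBand core stack before allocate_nic_ips starts handing out addresses from NVMF_IP_LEAST_ADDR=8. A minimal standalone sketch of that step, assuming root privileges and mlx5-capable modules built for the running kernel:

    #!/usr/bin/env bash
    # Sketch of load_ib_rdma_modules as exercised in nvmf/common.sh@66-72 above.
    set -e
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"   # no-op when the module is already loaded
    done
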
00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:22.032 11:01:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:22.032 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:22.032 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:22.032 altname enp217s0f0np0 00:28:22.032 altname ens818f0np0 00:28:22.032 inet 192.168.100.8/24 scope global mlx_0_0 00:28:22.032 valid_lft forever preferred_lft forever 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:22.032 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:22.033 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:22.033 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:22.033 altname enp217s0f1np1 00:28:22.033 altname ens818f1np1 00:28:22.033 inet 192.168.100.9/24 scope global mlx_0_1 00:28:22.033 valid_lft forever preferred_lft forever 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:22.033 
11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:22.033 192.168.100.9' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:22.033 192.168.100.9' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:28:22.033 11:01:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:22.033 192.168.100.9' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:22.033 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1804253 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1804253 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1804253 ']' 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.293 11:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:22.293 [2024-11-28 11:01:01.031235] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
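
Each get_ip_address call above reduces the output of "ip -o -4 addr show" for one interface to a bare IPv4 address with awk and cut; the first and second entries of the resulting RDMA_IP_LIST then become NVMF_FIRST_TARGET_IP (192.168.100.8) and NVMF_SECOND_TARGET_IP (192.168.100.9). A minimal sketch of that extraction, assuming an interface named mlx_0_0 exists and carries a single IPv4 address:

    # Sketch of get_ip_address (nvmf/common.sh@116-117) for one interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this testbed
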
00:28:22.293 [2024-11-28 11:01:01.031334] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.552 [2024-11-28 11:01:01.190847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:22.552 [2024-11-28 11:01:01.294213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.552 [2024-11-28 11:01:01.294264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.552 [2024-11-28 11:01:01.294276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.552 [2024-11-28 11:01:01.294289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.552 [2024-11-28 11:01:01.294298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.552 [2024-11-28 11:01:01.296782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.552 [2024-11-28 11:01:01.296822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.552 [2024-11-28 11:01:01.296906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.553 [2024-11-28 11:01:01.296930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:23.121 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.121 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:23.121 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:23.122 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:23.122 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:23.122 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.122 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:23.122 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.122 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:23.122 [2024-11-28 11:01:01.927335] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f4b93b0f940) succeed. 00:28:23.122 [2024-11-28 11:01:01.936843] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f4b931bd940) succeed. 
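
nvmfappstart above launches the target with the core mask and trace flags recorded in the trace, and the first rpc_cmd creates the RDMA transport; the two create_ib_device notices confirm that both mlx5 ports were registered with the target. A compact stand-in for that start-up sequence, assuming it is run from the root of an SPDK checkout and using scripts/rpc.py in place of the harness's rpc_cmd wrapper:

    # Start the NVMe-oF target and create the RDMA transport (flags copied from the trace).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # waitforlisten in the harness polls the RPC socket; a simple stand-in:
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
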
00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.381 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:23.641 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:28:23.641 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.641 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:23.641 Malloc1 00:28:23.641 [2024-11-28 11:01:02.359731] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:23.641 Malloc2 00:28:23.641 Malloc3 00:28:23.901 Malloc4 00:28:23.901 Malloc5 00:28:24.161 Malloc6 00:28:24.161 Malloc7 00:28:24.161 Malloc8 00:28:24.422 Malloc9 00:28:24.422 Malloc10 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1804677 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1804677 /var/tmp/bdevperf.sock 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1804677 ']' 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:24.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
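
The ten cat calls above append one RPC batch per subsystem to rpcs.txt, and shutdown.sh@36 replays them with a single rpc_cmd, which is what produces the Malloc1 through Malloc10 bdevs and the RDMA listener on 192.168.100.8 port 4420 reported in the trace. The batch itself is never echoed, so the following is only an assumed sketch of what one iteration typically contains, reusing MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 from shutdown.sh@12-13; the serial number and exact flags are guesses, not values taken from rpcs.txt:

    # Hypothetical per-subsystem batch for i=1 (the real commands live in rpcs.txt).
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
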
00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.422 { 00:28:24.422 "params": { 00:28:24.422 "name": "Nvme$subsystem", 00:28:24.422 "trtype": "$TEST_TRANSPORT", 00:28:24.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.422 "adrfam": "ipv4", 00:28:24.422 "trsvcid": "$NVMF_PORT", 00:28:24.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.422 "hdgst": ${hdgst:-false}, 00:28:24.422 "ddgst": ${ddgst:-false} 00:28:24.422 }, 00:28:24.422 "method": "bdev_nvme_attach_controller" 00:28:24.422 } 00:28:24.422 EOF 00:28:24.422 )") 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.422 { 00:28:24.422 "params": { 00:28:24.422 "name": "Nvme$subsystem", 00:28:24.422 "trtype": "$TEST_TRANSPORT", 00:28:24.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.422 "adrfam": "ipv4", 00:28:24.422 "trsvcid": "$NVMF_PORT", 00:28:24.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.422 "hdgst": ${hdgst:-false}, 00:28:24.422 "ddgst": ${ddgst:-false} 00:28:24.422 }, 00:28:24.422 "method": "bdev_nvme_attach_controller" 00:28:24.422 } 00:28:24.422 EOF 00:28:24.422 )") 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.422 { 00:28:24.422 "params": { 00:28:24.422 "name": "Nvme$subsystem", 00:28:24.422 "trtype": "$TEST_TRANSPORT", 00:28:24.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.422 "adrfam": "ipv4", 00:28:24.422 "trsvcid": "$NVMF_PORT", 00:28:24.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.422 "hdgst": ${hdgst:-false}, 00:28:24.422 "ddgst": ${ddgst:-false} 00:28:24.422 }, 00:28:24.422 "method": "bdev_nvme_attach_controller" 00:28:24.422 } 00:28:24.422 EOF 00:28:24.422 )") 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.422 { 00:28:24.422 "params": { 00:28:24.422 "name": "Nvme$subsystem", 00:28:24.422 "trtype": "$TEST_TRANSPORT", 00:28:24.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.422 "adrfam": "ipv4", 00:28:24.422 "trsvcid": "$NVMF_PORT", 00:28:24.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.422 "hdgst": ${hdgst:-false}, 00:28:24.422 "ddgst": ${ddgst:-false} 00:28:24.422 }, 00:28:24.422 "method": "bdev_nvme_attach_controller" 00:28:24.422 } 00:28:24.422 EOF 00:28:24.422 )") 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.422 { 00:28:24.422 "params": { 00:28:24.422 "name": "Nvme$subsystem", 00:28:24.422 "trtype": "$TEST_TRANSPORT", 00:28:24.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.422 "adrfam": "ipv4", 00:28:24.422 "trsvcid": "$NVMF_PORT", 00:28:24.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.422 "hdgst": ${hdgst:-false}, 00:28:24.422 "ddgst": ${ddgst:-false} 00:28:24.422 }, 00:28:24.422 "method": "bdev_nvme_attach_controller" 00:28:24.422 } 00:28:24.422 EOF 00:28:24.422 )") 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.422 { 00:28:24.422 "params": { 00:28:24.422 "name": "Nvme$subsystem", 00:28:24.422 "trtype": "$TEST_TRANSPORT", 00:28:24.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.422 "adrfam": "ipv4", 00:28:24.422 "trsvcid": "$NVMF_PORT", 00:28:24.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.422 "hdgst": ${hdgst:-false}, 00:28:24.422 "ddgst": ${ddgst:-false} 00:28:24.422 }, 00:28:24.422 "method": "bdev_nvme_attach_controller" 00:28:24.422 } 00:28:24.422 EOF 00:28:24.422 )") 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.422 { 00:28:24.422 "params": { 00:28:24.422 "name": "Nvme$subsystem", 00:28:24.422 "trtype": "$TEST_TRANSPORT", 00:28:24.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.422 "adrfam": "ipv4", 00:28:24.422 "trsvcid": "$NVMF_PORT", 00:28:24.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.422 "hdgst": ${hdgst:-false}, 00:28:24.422 "ddgst": ${ddgst:-false} 00:28:24.422 }, 00:28:24.422 "method": "bdev_nvme_attach_controller" 00:28:24.422 } 00:28:24.422 EOF 00:28:24.422 )") 00:28:24.422 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:24.422 11:01:03 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.423 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.423 { 00:28:24.423 "params": { 00:28:24.423 "name": "Nvme$subsystem", 00:28:24.423 "trtype": "$TEST_TRANSPORT", 00:28:24.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.423 "adrfam": "ipv4", 00:28:24.423 "trsvcid": "$NVMF_PORT", 00:28:24.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.423 "hdgst": ${hdgst:-false}, 00:28:24.423 "ddgst": ${ddgst:-false} 00:28:24.423 }, 00:28:24.423 "method": "bdev_nvme_attach_controller" 00:28:24.423 } 00:28:24.423 EOF 00:28:24.423 )") 00:28:24.423 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:24.423 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.423 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.423 { 00:28:24.423 "params": { 00:28:24.423 "name": "Nvme$subsystem", 00:28:24.423 "trtype": "$TEST_TRANSPORT", 00:28:24.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.423 "adrfam": "ipv4", 00:28:24.423 "trsvcid": "$NVMF_PORT", 00:28:24.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.423 "hdgst": ${hdgst:-false}, 00:28:24.423 "ddgst": ${ddgst:-false} 00:28:24.423 }, 00:28:24.423 "method": "bdev_nvme_attach_controller" 00:28:24.423 } 00:28:24.423 EOF 00:28:24.423 )") 00:28:24.423 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:24.683 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.683 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.683 { 00:28:24.683 "params": { 00:28:24.683 "name": "Nvme$subsystem", 00:28:24.683 "trtype": "$TEST_TRANSPORT", 00:28:24.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.683 "adrfam": "ipv4", 00:28:24.683 "trsvcid": "$NVMF_PORT", 00:28:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.683 "hdgst": ${hdgst:-false}, 00:28:24.683 "ddgst": ${ddgst:-false} 00:28:24.683 }, 00:28:24.683 "method": "bdev_nvme_attach_controller" 00:28:24.683 } 00:28:24.683 EOF 00:28:24.683 )") 00:28:24.683 [2024-11-28 11:01:03.304569] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:24.683 [2024-11-28 11:01:03.304665] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:24.683 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:24.683 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
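
gen_nvmf_target_json expands the heredoc template above once per subsystem id, substituting $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT, joins the fragments with IFS=',' and pretty-prints the result with jq; the merged configuration printed below is what bdev_svc and bdevperf receive on --json. A reduced sketch of such a generator, with the resolved values rdma, 192.168.100.8 and 4420 from this run hard-coded, and with the surrounding subsystems/bdev/config skeleton assumed rather than copied from the trace:

    # Reduced sketch of a gen_nvmf_target_json-style generator for subsystems 1..10.
    gen_config() {
        local entries=() i
        for i in "$@"; do
            entries+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "rdma", "traddr": "192.168.100.8", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' "$i" "$i" "$i")")
        done
        local IFS=,
        echo "{ \"subsystems\": [ { \"subsystem\": \"bdev\", \"config\": [ ${entries[*]} ] } ] }" | jq .
    }
    gen_config {1..10} > /tmp/nvmf_bdevperf.json   # handed to bdevperf via --json
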
00:28:24.683 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:24.683 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:24.683 "params": { 00:28:24.683 "name": "Nvme1", 00:28:24.683 "trtype": "rdma", 00:28:24.683 "traddr": "192.168.100.8", 00:28:24.683 "adrfam": "ipv4", 00:28:24.683 "trsvcid": "4420", 00:28:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:24.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:24.683 "hdgst": false, 00:28:24.683 "ddgst": false 00:28:24.683 }, 00:28:24.683 "method": "bdev_nvme_attach_controller" 00:28:24.683 },{ 00:28:24.683 "params": { 00:28:24.683 "name": "Nvme2", 00:28:24.683 "trtype": "rdma", 00:28:24.683 "traddr": "192.168.100.8", 00:28:24.683 "adrfam": "ipv4", 00:28:24.683 "trsvcid": "4420", 00:28:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:24.683 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:24.683 "hdgst": false, 00:28:24.683 "ddgst": false 00:28:24.683 }, 00:28:24.683 "method": "bdev_nvme_attach_controller" 00:28:24.683 },{ 00:28:24.683 "params": { 00:28:24.683 "name": "Nvme3", 00:28:24.683 "trtype": "rdma", 00:28:24.683 "traddr": "192.168.100.8", 00:28:24.683 "adrfam": "ipv4", 00:28:24.683 "trsvcid": "4420", 00:28:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:24.683 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:24.683 "hdgst": false, 00:28:24.683 "ddgst": false 00:28:24.683 }, 00:28:24.683 "method": "bdev_nvme_attach_controller" 00:28:24.683 },{ 00:28:24.683 "params": { 00:28:24.683 "name": "Nvme4", 00:28:24.683 "trtype": "rdma", 00:28:24.683 "traddr": "192.168.100.8", 00:28:24.683 "adrfam": "ipv4", 00:28:24.683 "trsvcid": "4420", 00:28:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:24.683 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:24.683 "hdgst": false, 00:28:24.683 "ddgst": false 00:28:24.683 }, 00:28:24.683 "method": "bdev_nvme_attach_controller" 00:28:24.683 },{ 00:28:24.683 "params": { 00:28:24.683 "name": "Nvme5", 00:28:24.683 "trtype": "rdma", 00:28:24.683 "traddr": "192.168.100.8", 00:28:24.683 "adrfam": "ipv4", 00:28:24.683 "trsvcid": "4420", 00:28:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:24.683 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:24.683 "hdgst": false, 00:28:24.683 "ddgst": false 00:28:24.683 }, 00:28:24.683 "method": "bdev_nvme_attach_controller" 00:28:24.683 },{ 00:28:24.683 "params": { 00:28:24.683 "name": "Nvme6", 00:28:24.683 "trtype": "rdma", 00:28:24.683 "traddr": "192.168.100.8", 00:28:24.683 "adrfam": "ipv4", 00:28:24.683 "trsvcid": "4420", 00:28:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:24.683 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:24.683 "hdgst": false, 00:28:24.683 "ddgst": false 00:28:24.683 }, 00:28:24.683 "method": "bdev_nvme_attach_controller" 00:28:24.683 },{ 00:28:24.683 "params": { 00:28:24.683 "name": "Nvme7", 00:28:24.683 "trtype": "rdma", 00:28:24.683 "traddr": "192.168.100.8", 00:28:24.683 "adrfam": "ipv4", 00:28:24.683 "trsvcid": "4420", 00:28:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:24.683 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:24.683 "hdgst": false, 00:28:24.683 "ddgst": false 00:28:24.683 }, 00:28:24.683 "method": "bdev_nvme_attach_controller" 00:28:24.683 },{ 00:28:24.683 "params": { 00:28:24.683 "name": "Nvme8", 00:28:24.683 "trtype": "rdma", 00:28:24.683 "traddr": "192.168.100.8", 00:28:24.683 "adrfam": "ipv4", 00:28:24.683 "trsvcid": "4420", 00:28:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:28:24.683 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:24.683 "hdgst": false, 00:28:24.683 "ddgst": false 00:28:24.683 }, 00:28:24.683 "method": "bdev_nvme_attach_controller" 00:28:24.683 },{ 00:28:24.683 "params": { 00:28:24.683 "name": "Nvme9", 00:28:24.683 "trtype": "rdma", 00:28:24.683 "traddr": "192.168.100.8", 00:28:24.683 "adrfam": "ipv4", 00:28:24.683 "trsvcid": "4420", 00:28:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:24.683 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:24.683 "hdgst": false, 00:28:24.683 "ddgst": false 00:28:24.683 }, 00:28:24.683 "method": "bdev_nvme_attach_controller" 00:28:24.683 },{ 00:28:24.683 "params": { 00:28:24.683 "name": "Nvme10", 00:28:24.683 "trtype": "rdma", 00:28:24.683 "traddr": "192.168.100.8", 00:28:24.683 "adrfam": "ipv4", 00:28:24.683 "trsvcid": "4420", 00:28:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:24.683 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:24.683 "hdgst": false, 00:28:24.683 "ddgst": false 00:28:24.683 }, 00:28:24.684 "method": "bdev_nvme_attach_controller" 00:28:24.684 }' 00:28:24.684 [2024-11-28 11:01:03.462317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.943 [2024-11-28 11:01:03.568897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.881 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.881 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:25.881 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:25.881 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.881 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:25.881 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.881 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1804677 00:28:25.881 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:25.881 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:26.817 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1804677 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:26.817 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1804253 00:28:26.817 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:26.817 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:26.817 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:26.817 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:26.817 11:01:05 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.817 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.817 { 00:28:26.817 "params": { 00:28:26.817 "name": "Nvme$subsystem", 00:28:26.817 "trtype": "$TEST_TRANSPORT", 00:28:26.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.817 "adrfam": "ipv4", 00:28:26.817 "trsvcid": "$NVMF_PORT", 00:28:26.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.817 "hdgst": ${hdgst:-false}, 00:28:26.817 "ddgst": ${ddgst:-false} 00:28:26.817 }, 00:28:26.817 "method": "bdev_nvme_attach_controller" 00:28:26.817 } 00:28:26.817 EOF 00:28:26.817 )") 00:28:26.817 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:26.817 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.817 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.817 { 00:28:26.817 "params": { 00:28:26.817 "name": "Nvme$subsystem", 00:28:26.817 "trtype": "$TEST_TRANSPORT", 00:28:26.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.817 "adrfam": "ipv4", 00:28:26.817 "trsvcid": "$NVMF_PORT", 00:28:26.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.817 "hdgst": ${hdgst:-false}, 00:28:26.817 "ddgst": ${ddgst:-false} 00:28:26.817 }, 00:28:26.817 "method": "bdev_nvme_attach_controller" 00:28:26.817 } 00:28:26.817 EOF 00:28:26.817 )") 00:28:26.817 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.077 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.077 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.077 { 00:28:27.077 "params": { 00:28:27.077 "name": "Nvme$subsystem", 00:28:27.077 "trtype": "$TEST_TRANSPORT", 00:28:27.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.077 "adrfam": "ipv4", 00:28:27.077 "trsvcid": "$NVMF_PORT", 00:28:27.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.077 "hdgst": ${hdgst:-false}, 00:28:27.077 "ddgst": ${ddgst:-false} 00:28:27.077 }, 00:28:27.077 "method": "bdev_nvme_attach_controller" 00:28:27.077 } 00:28:27.077 EOF 00:28:27.077 )") 00:28:27.077 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.077 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.077 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.077 { 00:28:27.077 "params": { 00:28:27.077 "name": "Nvme$subsystem", 00:28:27.077 "trtype": "$TEST_TRANSPORT", 00:28:27.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.077 "adrfam": "ipv4", 00:28:27.077 "trsvcid": "$NVMF_PORT", 00:28:27.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.077 "hdgst": ${hdgst:-false}, 00:28:27.077 "ddgst": ${ddgst:-false} 00:28:27.077 }, 00:28:27.077 "method": 
"bdev_nvme_attach_controller" 00:28:27.077 } 00:28:27.077 EOF 00:28:27.077 )") 00:28:27.077 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.077 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.077 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.077 { 00:28:27.077 "params": { 00:28:27.077 "name": "Nvme$subsystem", 00:28:27.077 "trtype": "$TEST_TRANSPORT", 00:28:27.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.077 "adrfam": "ipv4", 00:28:27.077 "trsvcid": "$NVMF_PORT", 00:28:27.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.077 "hdgst": ${hdgst:-false}, 00:28:27.077 "ddgst": ${ddgst:-false} 00:28:27.077 }, 00:28:27.077 "method": "bdev_nvme_attach_controller" 00:28:27.077 } 00:28:27.077 EOF 00:28:27.077 )") 00:28:27.077 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.077 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.078 { 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme$subsystem", 00:28:27.078 "trtype": "$TEST_TRANSPORT", 00:28:27.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "$NVMF_PORT", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.078 "hdgst": ${hdgst:-false}, 00:28:27.078 "ddgst": ${ddgst:-false} 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 } 00:28:27.078 EOF 00:28:27.078 )") 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.078 { 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme$subsystem", 00:28:27.078 "trtype": "$TEST_TRANSPORT", 00:28:27.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "$NVMF_PORT", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.078 "hdgst": ${hdgst:-false}, 00:28:27.078 "ddgst": ${ddgst:-false} 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 } 00:28:27.078 EOF 00:28:27.078 )") 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.078 { 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme$subsystem", 00:28:27.078 "trtype": "$TEST_TRANSPORT", 00:28:27.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "$NVMF_PORT", 00:28:27.078 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.078 "hdgst": ${hdgst:-false}, 00:28:27.078 "ddgst": ${ddgst:-false} 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 } 00:28:27.078 EOF 00:28:27.078 )") 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.078 { 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme$subsystem", 00:28:27.078 "trtype": "$TEST_TRANSPORT", 00:28:27.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "$NVMF_PORT", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.078 "hdgst": ${hdgst:-false}, 00:28:27.078 "ddgst": ${ddgst:-false} 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 } 00:28:27.078 EOF 00:28:27.078 )") 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.078 { 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme$subsystem", 00:28:27.078 "trtype": "$TEST_TRANSPORT", 00:28:27.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "$NVMF_PORT", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.078 "hdgst": ${hdgst:-false}, 00:28:27.078 "ddgst": ${ddgst:-false} 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 } 00:28:27.078 EOF 00:28:27.078 )") 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.078 [2024-11-28 11:01:05.763571] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:27.078 [2024-11-28 11:01:05.763673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1805094 ] 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:27.078 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme1", 00:28:27.078 "trtype": "rdma", 00:28:27.078 "traddr": "192.168.100.8", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "4420", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:27.078 "hdgst": false, 00:28:27.078 "ddgst": false 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 },{ 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme2", 00:28:27.078 "trtype": "rdma", 00:28:27.078 "traddr": "192.168.100.8", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "4420", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:27.078 "hdgst": false, 00:28:27.078 "ddgst": false 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 },{ 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme3", 00:28:27.078 "trtype": "rdma", 00:28:27.078 "traddr": "192.168.100.8", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "4420", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:27.078 "hdgst": false, 00:28:27.078 "ddgst": false 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 },{ 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme4", 00:28:27.078 "trtype": "rdma", 00:28:27.078 "traddr": "192.168.100.8", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "4420", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:27.078 "hdgst": false, 00:28:27.078 "ddgst": false 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 },{ 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme5", 00:28:27.078 "trtype": "rdma", 00:28:27.078 "traddr": "192.168.100.8", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "4420", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:27.078 "hdgst": false, 00:28:27.078 "ddgst": false 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 },{ 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme6", 00:28:27.078 "trtype": "rdma", 00:28:27.078 "traddr": "192.168.100.8", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "4420", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:27.078 "hdgst": false, 00:28:27.078 "ddgst": false 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 },{ 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme7", 00:28:27.078 "trtype": "rdma", 00:28:27.078 "traddr": "192.168.100.8", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "4420", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:27.078 "hdgst": false, 00:28:27.078 "ddgst": false 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 },{ 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme8", 00:28:27.078 "trtype": "rdma", 00:28:27.078 "traddr": "192.168.100.8", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "4420", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:27.078 "hdgst": false, 00:28:27.078 "ddgst": false 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 },{ 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme9", 00:28:27.078 "trtype": "rdma", 00:28:27.078 "traddr": "192.168.100.8", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "4420", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:27.078 "hdgst": false, 00:28:27.078 "ddgst": false 00:28:27.078 }, 00:28:27.078 "method": "bdev_nvme_attach_controller" 00:28:27.078 },{ 00:28:27.078 "params": { 00:28:27.078 "name": "Nvme10", 00:28:27.078 "trtype": "rdma", 00:28:27.078 "traddr": "192.168.100.8", 00:28:27.078 "adrfam": "ipv4", 00:28:27.078 "trsvcid": "4420", 00:28:27.078 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:27.078 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:27.078 "hdgst": false, 00:28:27.079 "ddgst": false 00:28:27.079 }, 00:28:27.079 "method": "bdev_nvme_attach_controller" 00:28:27.079 }' 00:28:27.079 [2024-11-28 11:01:05.922038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.338 [2024-11-28 11:01:06.025586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.718 Running I/O for 1 seconds... 00:28:29.657 3173.00 IOPS, 198.31 MiB/s 00:28:29.657 Latency(us) 00:28:29.657 [2024-11-28T10:01:08.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.657 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.657 Verification LBA range: start 0x0 length 0x400 00:28:29.657 Nvme1n1 : 1.18 326.76 20.42 0.00 0.00 192324.05 6710.89 243269.63 00:28:29.657 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.657 Verification LBA range: start 0x0 length 0x400 00:28:29.657 Nvme2n1 : 1.18 351.73 21.98 0.00 0.00 175913.87 6763.32 173644.19 00:28:29.657 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.657 Verification LBA range: start 0x0 length 0x400 00:28:29.657 Nvme3n1 : 1.18 351.31 21.96 0.00 0.00 173633.91 13159.63 166094.44 00:28:29.657 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.657 Verification LBA range: start 0x0 length 0x400 00:28:29.657 Nvme4n1 : 1.19 363.46 22.72 0.00 0.00 165721.32 2411.72 158544.69 00:28:29.657 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.657 Verification LBA range: start 0x0 length 0x400 00:28:29.657 Nvme5n1 : 1.19 328.66 20.54 0.00 0.00 179273.96 14784.92 147639.50 00:28:29.657 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.657 Verification LBA range: start 0x0 length 0x400 00:28:29.657 Nvme6n1 : 1.19 336.79 21.05 0.00 0.00 172786.97 14260.63 136734.31 00:28:29.657 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.657 Verification LBA range: start 0x0 length 0x400 00:28:29.657 Nvme7n1 : 1.19 349.84 21.86 0.00 0.00 164372.86 14365.49 129184.56 00:28:29.657 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.657 Verification LBA range: start 0x0 length 0x400 00:28:29.657 Nvme8n1 : 1.19 324.41 20.28 0.00 0.00 173508.90 13841.20 119957.09 00:28:29.657 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.657 Verification LBA range: start 0x0 length 0x400 00:28:29.657 Nvme9n1 : 1.18 326.12 20.38 0.00 0.00 171865.43 13631.49 107374.18 00:28:29.657 
Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.657 Verification LBA range: start 0x0 length 0x400 00:28:29.657 Nvme10n1 : 1.18 325.51 20.34 0.00 0.00 169223.24 13631.49 124151.40 00:28:29.657 [2024-11-28T10:01:08.538Z] =================================================================================================================== 00:28:29.657 [2024-11-28T10:01:08.538Z] Total : 3384.57 211.54 0.00 0.00 173718.65 2411.72 243269.63 00:28:31.038 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:31.038 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:31.038 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:31.038 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:31.038 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:31.038 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:31.038 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:31.038 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:31.038 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:31.038 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:31.039 rmmod nvme_rdma 00:28:31.039 rmmod nvme_fabrics 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1804253 ']' 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1804253 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1804253 ']' 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1804253 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1804253 00:28:31.039 11:01:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1804253' 00:28:31.039 killing process with pid 1804253 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1804253 00:28:31.039 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1804253 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:34.330 00:28:34.330 real 0m20.480s 00:28:34.330 user 0m51.086s 00:28:34.330 sys 0m8.087s 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.330 ************************************ 00:28:34.330 END TEST nvmf_shutdown_tc1 00:28:34.330 ************************************ 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:34.330 ************************************ 00:28:34.330 START TEST nvmf_shutdown_tc2 00:28:34.330 ************************************ 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:34.330 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.331 11:01:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:34.331 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:34.331 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:34.331 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:34.331 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:28:34.331 
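The device discovery above boils down to a sysfs walk: for each candidate PCI address, the net devices are read out of /sys/bus/pci/devices/$pci/net. A reduced sketch, keeping only the vendor/device pair actually matched in this run (Mellanox 0x15b3:0x1015) rather than the full ID table common.sh maintains:

for pci in /sys/bus/pci/devices/*; do
  # Mellanox ConnectX vendor 0x15b3; 0x1015 is the device ID matched in this run
  [[ $(cat "$pci/vendor") == 0x15b3 && $(cat "$pci/device") == 0x1015 ]] || continue
  [[ -d $pci/net ]] || continue
  pci_net_devs=("$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")
  echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
done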
11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:34.331 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:34.592 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:34.592 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:34.592 altname enp217s0f0np0 00:28:34.592 altname ens818f0np0 00:28:34.592 inet 192.168.100.8/24 scope global mlx_0_0 00:28:34.592 valid_lft forever preferred_lft forever 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:34.592 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:34.592 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:34.592 altname enp217s0f1np1 00:28:34.592 altname ens818f1np1 00:28:34.592 inet 192.168.100.9/24 scope global mlx_0_1 00:28:34.592 valid_lft forever preferred_lft forever 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:34.592 11:01:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:34.592 11:01:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:34.592 192.168.100.9' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:34.592 192.168.100.9' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:34.592 192.168.100.9' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.592 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.593 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1806414 00:28:34.593 11:01:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1806414 00:28:34.593 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1806414 ']' 00:28:34.593 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.593 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.593 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.593 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.593 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.593 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:34.593 [2024-11-28 11:01:13.456232] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:34.593 [2024-11-28 11:01:13.456326] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.852 [2024-11-28 11:01:13.609996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.852 [2024-11-28 11:01:13.710435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.852 [2024-11-28 11:01:13.710484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.852 [2024-11-28 11:01:13.710496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.852 [2024-11-28 11:01:13.710525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.852 [2024-11-28 11:01:13.710535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
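nvmfappstart amounts to launching nvmf_tgt on the requested cores and polling its RPC socket before any configuration is attempted; core mask 0x1E (binary 11110) is what produces the reactors on cores 1-4 reported below. A simplified sketch, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as the readiness probe (the harness's waitforlisten helper does more bookkeeping):

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# poll the RPC socket; rpc_get_methods succeeds once the app is ready for RPCs
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
  kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  sleep 0.5
done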
00:28:34.852 [2024-11-28 11:01:13.713086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.852 [2024-11-28 11:01:13.713151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.852 [2024-11-28 11:01:13.713270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.852 [2024-11-28 11:01:13.713296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:35.420 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.420 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:35.420 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:35.420 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.420 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.420 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.420 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:35.420 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.420 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.679 [2024-11-28 11:01:14.335996] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fbf6df71940) succeed. 00:28:35.679 [2024-11-28 11:01:14.345429] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fbf6df2d940) succeed. 
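With the RDMA transport created, create_subsystems assembles an rpcs.txt batch that, for each of the ten subsystems, creates a Malloc bdev, a subsystem, a namespace, and an RDMA listener on 192.168.100.8:4420. A sketch of one such iteration written as individual rpc.py calls; the Malloc size, block size, and serial number here are illustrative, not the exact values shutdown.sh uses:

# one subsystem's worth of setup; the test script loops this for i in 1..10
rpc=./scripts/rpc.py
i=1
$rpc bdev_malloc_create 64 512 -b Malloc$i
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420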
00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.939 11:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.939 Malloc1 00:28:35.939 [2024-11-28 11:01:14.762337] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:36.198 Malloc2 00:28:36.198 Malloc3 00:28:36.198 Malloc4 00:28:36.464 Malloc5 00:28:36.464 Malloc6 00:28:36.464 Malloc7 00:28:36.766 Malloc8 00:28:36.766 Malloc9 00:28:36.766 Malloc10 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1806938 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1806938 /var/tmp/bdevperf.sock 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1806938 ']' 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:36.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
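Note: between the rpc_cmd batch above and the bdevperf launch, the target reports bdevs Malloc1..Malloc10 and an RDMA listener on 192.168.100.8:4420; the actual RPC lines live in rpcs.txt, which the log never prints. The following is a hedged, hypothetical reconstruction of that per-subsystem plumbing using stock SPDK RPCs (the bdev size, block size, and any extra subsystem options are assumptions, not read from this job):

# Hypothetical reconstruction of the rpcs.txt contents; names follow the trace.
for i in $(seq 1 10); do
    ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t rdma -a 192.168.100.8 -s 4420
done

The bdevperf instance launched just above (-q 64 -o 65536 -w verify -t 10) then attaches one NVMe-oF controller per subsystem and drives 64-deep, 64 KiB verify I/O against each for ten seconds.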
00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.766 { 00:28:36.766 "params": { 00:28:36.766 "name": "Nvme$subsystem", 00:28:36.766 "trtype": "$TEST_TRANSPORT", 00:28:36.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.766 "adrfam": "ipv4", 00:28:36.766 "trsvcid": "$NVMF_PORT", 00:28:36.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.766 "hdgst": ${hdgst:-false}, 00:28:36.766 "ddgst": ${ddgst:-false} 00:28:36.766 }, 00:28:36.766 "method": "bdev_nvme_attach_controller" 00:28:36.766 } 00:28:36.766 EOF 00:28:36.766 )") 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.766 { 00:28:36.766 "params": { 00:28:36.766 "name": "Nvme$subsystem", 00:28:36.766 "trtype": "$TEST_TRANSPORT", 00:28:36.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.766 "adrfam": "ipv4", 00:28:36.766 "trsvcid": "$NVMF_PORT", 00:28:36.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.766 "hdgst": ${hdgst:-false}, 00:28:36.766 "ddgst": ${ddgst:-false} 00:28:36.766 }, 00:28:36.766 "method": "bdev_nvme_attach_controller" 00:28:36.766 } 00:28:36.766 EOF 00:28:36.766 )") 00:28:36.766 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.026 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.026 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.026 { 00:28:37.026 "params": { 00:28:37.026 "name": "Nvme$subsystem", 00:28:37.026 "trtype": "$TEST_TRANSPORT", 00:28:37.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.026 "adrfam": "ipv4", 00:28:37.026 "trsvcid": "$NVMF_PORT", 00:28:37.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.026 "hdgst": ${hdgst:-false}, 00:28:37.026 "ddgst": ${ddgst:-false} 00:28:37.026 }, 00:28:37.026 "method": "bdev_nvme_attach_controller" 00:28:37.026 } 00:28:37.026 EOF 00:28:37.026 )") 00:28:37.026 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.026 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.026 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.026 { 00:28:37.026 "params": { 00:28:37.026 "name": "Nvme$subsystem", 00:28:37.026 "trtype": "$TEST_TRANSPORT", 00:28:37.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.026 "adrfam": "ipv4", 00:28:37.026 "trsvcid": "$NVMF_PORT", 00:28:37.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.026 "hdgst": ${hdgst:-false}, 00:28:37.026 "ddgst": ${ddgst:-false} 00:28:37.026 }, 00:28:37.026 "method": "bdev_nvme_attach_controller" 00:28:37.026 } 00:28:37.026 EOF 00:28:37.026 )") 00:28:37.026 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.026 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.026 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.026 { 00:28:37.026 "params": { 00:28:37.026 "name": "Nvme$subsystem", 00:28:37.026 "trtype": "$TEST_TRANSPORT", 00:28:37.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.026 "adrfam": "ipv4", 00:28:37.026 "trsvcid": "$NVMF_PORT", 00:28:37.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.026 "hdgst": ${hdgst:-false}, 00:28:37.026 "ddgst": ${ddgst:-false} 00:28:37.026 }, 00:28:37.026 "method": "bdev_nvme_attach_controller" 00:28:37.026 } 00:28:37.026 EOF 00:28:37.026 )") 00:28:37.026 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.026 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.026 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.026 { 00:28:37.026 "params": { 00:28:37.026 "name": "Nvme$subsystem", 00:28:37.026 "trtype": "$TEST_TRANSPORT", 00:28:37.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.026 "adrfam": "ipv4", 00:28:37.026 "trsvcid": "$NVMF_PORT", 00:28:37.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.026 "hdgst": ${hdgst:-false}, 00:28:37.026 "ddgst": ${ddgst:-false} 00:28:37.026 }, 00:28:37.026 "method": "bdev_nvme_attach_controller" 00:28:37.026 } 00:28:37.027 EOF 00:28:37.027 )") 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.027 { 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme$subsystem", 00:28:37.027 "trtype": "$TEST_TRANSPORT", 00:28:37.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "$NVMF_PORT", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.027 "hdgst": ${hdgst:-false}, 00:28:37.027 "ddgst": ${ddgst:-false} 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 } 00:28:37.027 EOF 00:28:37.027 )") 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.027 11:01:15 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.027 { 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme$subsystem", 00:28:37.027 "trtype": "$TEST_TRANSPORT", 00:28:37.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "$NVMF_PORT", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.027 "hdgst": ${hdgst:-false}, 00:28:37.027 "ddgst": ${ddgst:-false} 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 } 00:28:37.027 EOF 00:28:37.027 )") 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.027 { 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme$subsystem", 00:28:37.027 "trtype": "$TEST_TRANSPORT", 00:28:37.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "$NVMF_PORT", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.027 "hdgst": ${hdgst:-false}, 00:28:37.027 "ddgst": ${ddgst:-false} 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 } 00:28:37.027 EOF 00:28:37.027 )") 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.027 { 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme$subsystem", 00:28:37.027 "trtype": "$TEST_TRANSPORT", 00:28:37.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "$NVMF_PORT", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.027 "hdgst": ${hdgst:-false}, 00:28:37.027 "ddgst": ${ddgst:-false} 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 } 00:28:37.027 EOF 00:28:37.027 )") 00:28:37.027 [2024-11-28 11:01:15.711044] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:37.027 [2024-11-28 11:01:15.711134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806938 ] 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
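Note: the block above is gen_nvmf_target_json assembling one heredoc fragment per subsystem into a bash array and piping the joined result through jq; the fully rendered configuration handed to bdevperf on /dev/fd/63 is printed immediately below. A minimal sketch of that fragment-join-and-render pattern, assuming the outer "subsystems"/"bdev" wrapper shape (the real wrapper lives in nvmf/common.sh and also sets bdev_nvme options; two entries stand in for the ten used here):

# Sketch of the pattern visible in the trace above: collect one JSON object per
# subsystem, join with commas, wrap in a bdev config section, pretty-print with jq.
config=()
for i in 1 2; do            # the test iterates over subsystems 1..10
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i", "trtype": "rdma", "adrfam": "ipv4",
    "traddr": "192.168.100.8", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false, "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
(IFS=,; printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}") | jq .

bdevperf consumes the rendered JSON via process substitution, which is why its command line above shows --json /dev/fd/63.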
00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:37.027 11:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme1", 00:28:37.027 "trtype": "rdma", 00:28:37.027 "traddr": "192.168.100.8", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "4420", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.027 "hdgst": false, 00:28:37.027 "ddgst": false 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 },{ 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme2", 00:28:37.027 "trtype": "rdma", 00:28:37.027 "traddr": "192.168.100.8", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "4420", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:37.027 "hdgst": false, 00:28:37.027 "ddgst": false 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 },{ 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme3", 00:28:37.027 "trtype": "rdma", 00:28:37.027 "traddr": "192.168.100.8", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "4420", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:37.027 "hdgst": false, 00:28:37.027 "ddgst": false 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 },{ 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme4", 00:28:37.027 "trtype": "rdma", 00:28:37.027 "traddr": "192.168.100.8", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "4420", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:37.027 "hdgst": false, 00:28:37.027 "ddgst": false 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 },{ 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme5", 00:28:37.027 "trtype": "rdma", 00:28:37.027 "traddr": "192.168.100.8", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "4420", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:37.027 "hdgst": false, 00:28:37.027 "ddgst": false 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 },{ 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme6", 00:28:37.027 "trtype": "rdma", 00:28:37.027 "traddr": "192.168.100.8", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "4420", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:37.027 "hdgst": false, 00:28:37.027 "ddgst": false 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 },{ 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme7", 00:28:37.027 "trtype": "rdma", 00:28:37.027 "traddr": "192.168.100.8", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "4420", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:37.027 "hdgst": false, 00:28:37.027 "ddgst": false 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 },{ 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme8", 00:28:37.027 "trtype": "rdma", 00:28:37.027 "traddr": "192.168.100.8", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "4420", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:37.027 "hdgst": false, 00:28:37.027 "ddgst": false 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 },{ 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme9", 00:28:37.027 "trtype": "rdma", 00:28:37.027 "traddr": "192.168.100.8", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "4420", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:37.027 "hdgst": false, 00:28:37.027 "ddgst": false 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 },{ 00:28:37.027 "params": { 00:28:37.027 "name": "Nvme10", 00:28:37.027 "trtype": "rdma", 00:28:37.027 "traddr": "192.168.100.8", 00:28:37.027 "adrfam": "ipv4", 00:28:37.027 "trsvcid": "4420", 00:28:37.027 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:37.027 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:37.027 "hdgst": false, 00:28:37.027 "ddgst": false 00:28:37.027 }, 00:28:37.027 "method": "bdev_nvme_attach_controller" 00:28:37.027 }' 00:28:37.027 [2024-11-28 11:01:15.868153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.287 [2024-11-28 11:01:15.971118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.666 Running I/O for 10 seconds... 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:38.666 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:38.667 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:38.667 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.667 11:01:17 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.667 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.667 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:38.667 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:38.667 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:38.926 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:38.926 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:38.926 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:38.926 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:38.926 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.926 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.185 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=163 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 163 -ge 100 ']' 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1806938 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1806938 ']' 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1806938 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1806938 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1806938' 00:28:39.186 killing process with pid 1806938 00:28:39.186 11:01:17 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1806938 00:28:39.186 11:01:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1806938 00:28:39.186 Received shutdown signal, test time was about 0.910860 seconds 00:28:39.186 00:28:39.186 Latency(us) 00:28:39.186 [2024-11-28T10:01:18.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.186 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.186 Verification LBA range: start 0x0 length 0x400 00:28:39.186 Nvme1n1 : 0.89 322.15 20.13 0.00 0.00 194640.65 10328.47 234881.02 00:28:39.186 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.186 Verification LBA range: start 0x0 length 0x400 00:28:39.186 Nvme2n1 : 0.90 321.61 20.10 0.00 0.00 191244.42 10590.62 226492.42 00:28:39.186 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.186 Verification LBA range: start 0x0 length 0x400 00:28:39.186 Nvme3n1 : 0.90 356.83 22.30 0.00 0.00 168880.78 4613.73 159383.55 00:28:39.186 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.186 Verification LBA range: start 0x0 length 0x400 00:28:39.186 Nvme4n1 : 0.90 359.59 22.47 0.00 0.00 164573.15 5321.52 152672.67 00:28:39.186 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.186 Verification LBA range: start 0x0 length 0x400 00:28:39.186 Nvme5n1 : 0.90 355.52 22.22 0.00 0.00 163800.19 11481.91 140928.61 00:28:39.186 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.186 Verification LBA range: start 0x0 length 0x400 00:28:39.186 Nvme6n1 : 0.90 354.70 22.17 0.00 0.00 161212.25 12478.05 127506.84 00:28:39.186 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.186 Verification LBA range: start 0x0 length 0x400 00:28:39.186 Nvme7n1 : 0.90 354.06 22.13 0.00 0.00 157492.92 13159.63 118279.37 00:28:39.186 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.186 Verification LBA range: start 0x0 length 0x400 00:28:39.186 Nvme8n1 : 0.91 353.41 22.09 0.00 0.00 154540.93 13631.49 109051.90 00:28:39.186 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.186 Verification LBA range: start 0x0 length 0x400 00:28:39.186 Nvme9n1 : 0.91 352.55 22.03 0.00 0.00 152778.83 14680.06 104018.74 00:28:39.186 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.186 Verification LBA range: start 0x0 length 0x400 00:28:39.186 Nvme10n1 : 0.91 281.36 17.58 0.00 0.00 187078.96 11167.33 243269.63 00:28:39.186 [2024-11-28T10:01:18.067Z] =================================================================================================================== 00:28:39.186 [2024-11-28T10:01:18.067Z] Total : 3411.78 213.24 0.00 0.00 168770.78 4613.73 243269.63 00:28:40.566 11:01:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1806414 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:41.505 11:01:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:41.505 rmmod nvme_rdma 00:28:41.505 rmmod nvme_fabrics 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1806414 ']' 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1806414 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1806414 ']' 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1806414 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1806414 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1806414' 00:28:41.505 killing process with pid 1806414 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1806414 00:28:41.505 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1806414 00:28:45.702 11:01:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:45.702 00:28:45.702 real 0m10.610s 00:28:45.702 user 0m41.408s 00:28:45.702 sys 0m1.631s 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.702 ************************************ 00:28:45.702 END TEST nvmf_shutdown_tc2 00:28:45.702 ************************************ 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:45.702 ************************************ 00:28:45.702 START TEST nvmf_shutdown_tc3 00:28:45.702 ************************************ 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.702 11:01:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.702 
11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:45.702 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:45.702 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:45.702 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:45.703 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:45.703 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:45.703 11:01:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:45.703 11:01:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:45.703 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:45.703 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:45.703 altname enp217s0f0np0 00:28:45.703 altname ens818f0np0 00:28:45.703 inet 192.168.100.8/24 scope global mlx_0_0 00:28:45.703 valid_lft forever preferred_lft forever 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:45.703 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:45.703 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:45.703 altname enp217s0f1np1 00:28:45.703 altname ens818f1np1 00:28:45.703 inet 192.168.100.9/24 scope global mlx_0_1 00:28:45.703 valid_lft forever preferred_lft forever 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:45.703 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.704 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:45.704 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:45.704 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:45.704 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:45.704 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:45.704 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:45.704 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:45.704 11:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 
-- # ip -o -4 addr show mlx_0_1 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:45.704 192.168.100.9' 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:45.704 192.168.100.9' 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:45.704 192.168.100.9' 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1808433 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1808433 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1808433 ']' 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.704 11:01:24 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.704 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.704 [2024-11-28 11:01:24.178633] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:45.704 [2024-11-28 11:01:24.178744] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.704 [2024-11-28 11:01:24.333325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.704 [2024-11-28 11:01:24.431082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.704 [2024-11-28 11:01:24.431131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.704 [2024-11-28 11:01:24.431143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.704 [2024-11-28 11:01:24.431172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.704 [2024-11-28 11:01:24.431181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.704 [2024-11-28 11:01:24.433846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.704 [2024-11-28 11:01:24.433914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.704 [2024-11-28 11:01:24.434033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.704 [2024-11-28 11:01:24.434058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:46.275 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.275 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:46.275 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.275 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.275 11:01:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.275 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.275 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:46.275 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.275 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.275 [2024-11-28 11:01:25.078918] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x6120000292c0/0x7f0cd0dbd940) succeed. 00:28:46.275 [2024-11-28 11:01:25.089127] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f0cd0d79940) succeed. 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.533 
11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.533 11:01:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.792 Malloc1 00:28:46.792 [2024-11-28 11:01:25.499684] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:46.792 Malloc2 00:28:46.792 Malloc3 00:28:47.051 Malloc4 00:28:47.051 Malloc5 00:28:47.310 Malloc6 00:28:47.310 Malloc7 00:28:47.310 Malloc8 00:28:47.568 Malloc9 00:28:47.568 Malloc10 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1808756 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1808756 /var/tmp/bdevperf.sock 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1808756 ']' 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:47.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
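
Annotation (not part of the log): the trace above shows target/shutdown.sh queueing ten per-subsystem config fragments (the repeated `cat` calls), the target bringing up Malloc1 through Malloc10 plus an NVMe/RDMA listener on 192.168.100.8:4420, and bdevperf being launched against it. A minimal sketch of the kind of rpc.py sequence that would produce that target layout; the bdev sizes, serial numbers, and exact flags are illustrative assumptions, not values read from this log (only the transport options and listener address appear in the trace):

    # Sketch only: approximate rpc.py calls behind the Malloc/subsystem output above.
    # Sizes and serial numbers are assumptions; transport opts mirror the
    # '-t rdma --num-shared-buffers 1024 -u 8192' issued earlier in the trace.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in {1..10}; do
        $rpc bdev_malloc_create -b Malloc$i 64 512    # 64 MiB bdev, 512 B blocks (assumed)
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    done
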
00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.568 { 00:28:47.568 "params": { 00:28:47.568 "name": "Nvme$subsystem", 00:28:47.568 "trtype": "$TEST_TRANSPORT", 00:28:47.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.568 "adrfam": "ipv4", 00:28:47.568 "trsvcid": "$NVMF_PORT", 00:28:47.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.568 "hdgst": ${hdgst:-false}, 00:28:47.568 "ddgst": ${ddgst:-false} 00:28:47.568 }, 00:28:47.568 "method": "bdev_nvme_attach_controller" 00:28:47.568 } 00:28:47.568 EOF 00:28:47.568 )") 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.568 { 00:28:47.568 "params": { 00:28:47.568 "name": "Nvme$subsystem", 00:28:47.568 "trtype": "$TEST_TRANSPORT", 00:28:47.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.568 "adrfam": "ipv4", 00:28:47.568 "trsvcid": "$NVMF_PORT", 00:28:47.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.568 "hdgst": ${hdgst:-false}, 00:28:47.568 "ddgst": ${ddgst:-false} 00:28:47.568 }, 00:28:47.568 "method": "bdev_nvme_attach_controller" 00:28:47.568 } 00:28:47.568 EOF 00:28:47.568 )") 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.568 { 00:28:47.568 "params": { 00:28:47.568 "name": "Nvme$subsystem", 00:28:47.568 "trtype": "$TEST_TRANSPORT", 00:28:47.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.568 "adrfam": "ipv4", 00:28:47.568 "trsvcid": "$NVMF_PORT", 00:28:47.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.568 "hdgst": ${hdgst:-false}, 00:28:47.568 "ddgst": ${ddgst:-false} 00:28:47.568 }, 00:28:47.568 "method": "bdev_nvme_attach_controller" 00:28:47.568 } 00:28:47.568 EOF 00:28:47.568 )") 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:47.568 11:01:26 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.568 { 00:28:47.568 "params": { 00:28:47.568 "name": "Nvme$subsystem", 00:28:47.568 "trtype": "$TEST_TRANSPORT", 00:28:47.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.568 "adrfam": "ipv4", 00:28:47.568 "trsvcid": "$NVMF_PORT", 00:28:47.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.568 "hdgst": ${hdgst:-false}, 00:28:47.568 "ddgst": ${ddgst:-false} 00:28:47.568 }, 00:28:47.568 "method": "bdev_nvme_attach_controller" 00:28:47.568 } 00:28:47.568 EOF 00:28:47.568 )") 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.568 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.568 { 00:28:47.568 "params": { 00:28:47.568 "name": "Nvme$subsystem", 00:28:47.569 "trtype": "$TEST_TRANSPORT", 00:28:47.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.569 "adrfam": "ipv4", 00:28:47.569 "trsvcid": "$NVMF_PORT", 00:28:47.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.569 "hdgst": ${hdgst:-false}, 00:28:47.569 "ddgst": ${ddgst:-false} 00:28:47.569 }, 00:28:47.569 "method": "bdev_nvme_attach_controller" 00:28:47.569 } 00:28:47.569 EOF 00:28:47.569 )") 00:28:47.569 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:47.569 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.569 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.569 { 00:28:47.569 "params": { 00:28:47.569 "name": "Nvme$subsystem", 00:28:47.569 "trtype": "$TEST_TRANSPORT", 00:28:47.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.569 "adrfam": "ipv4", 00:28:47.569 "trsvcid": "$NVMF_PORT", 00:28:47.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.569 "hdgst": ${hdgst:-false}, 00:28:47.569 "ddgst": ${ddgst:-false} 00:28:47.569 }, 00:28:47.569 "method": "bdev_nvme_attach_controller" 00:28:47.569 } 00:28:47.569 EOF 00:28:47.569 )") 00:28:47.569 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:47.569 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.569 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.569 { 00:28:47.569 "params": { 00:28:47.569 "name": "Nvme$subsystem", 00:28:47.569 "trtype": "$TEST_TRANSPORT", 00:28:47.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.569 "adrfam": "ipv4", 00:28:47.569 "trsvcid": "$NVMF_PORT", 00:28:47.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.569 "hdgst": ${hdgst:-false}, 00:28:47.569 "ddgst": ${ddgst:-false} 00:28:47.569 }, 00:28:47.569 "method": 
"bdev_nvme_attach_controller" 00:28:47.569 } 00:28:47.569 EOF 00:28:47.569 )") 00:28:47.569 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:47.569 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.569 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.569 { 00:28:47.569 "params": { 00:28:47.569 "name": "Nvme$subsystem", 00:28:47.569 "trtype": "$TEST_TRANSPORT", 00:28:47.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.569 "adrfam": "ipv4", 00:28:47.569 "trsvcid": "$NVMF_PORT", 00:28:47.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.569 "hdgst": ${hdgst:-false}, 00:28:47.569 "ddgst": ${ddgst:-false} 00:28:47.569 }, 00:28:47.569 "method": "bdev_nvme_attach_controller" 00:28:47.569 } 00:28:47.569 EOF 00:28:47.569 )") 00:28:47.827 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:47.827 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.827 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.827 { 00:28:47.827 "params": { 00:28:47.827 "name": "Nvme$subsystem", 00:28:47.827 "trtype": "$TEST_TRANSPORT", 00:28:47.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.827 "adrfam": "ipv4", 00:28:47.827 "trsvcid": "$NVMF_PORT", 00:28:47.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.827 "hdgst": ${hdgst:-false}, 00:28:47.827 "ddgst": ${ddgst:-false} 00:28:47.827 }, 00:28:47.827 "method": "bdev_nvme_attach_controller" 00:28:47.827 } 00:28:47.827 EOF 00:28:47.827 )") 00:28:47.827 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:47.827 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.827 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.827 { 00:28:47.827 "params": { 00:28:47.827 "name": "Nvme$subsystem", 00:28:47.827 "trtype": "$TEST_TRANSPORT", 00:28:47.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.827 "adrfam": "ipv4", 00:28:47.828 "trsvcid": "$NVMF_PORT", 00:28:47.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.828 "hdgst": ${hdgst:-false}, 00:28:47.828 "ddgst": ${ddgst:-false} 00:28:47.828 }, 00:28:47.828 "method": "bdev_nvme_attach_controller" 00:28:47.828 } 00:28:47.828 EOF 00:28:47.828 )") 00:28:47.828 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:47.828 [2024-11-28 11:01:26.466142] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:47.828 [2024-11-28 11:01:26.466232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808756 ] 00:28:47.828 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:28:47.828 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:47.828 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:47.828 "params": { 00:28:47.828 "name": "Nvme1", 00:28:47.828 "trtype": "rdma", 00:28:47.828 "traddr": "192.168.100.8", 00:28:47.828 "adrfam": "ipv4", 00:28:47.828 "trsvcid": "4420", 00:28:47.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:47.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:47.828 "hdgst": false, 00:28:47.828 "ddgst": false 00:28:47.828 }, 00:28:47.828 "method": "bdev_nvme_attach_controller" 00:28:47.828 },{ 00:28:47.828 "params": { 00:28:47.828 "name": "Nvme2", 00:28:47.828 "trtype": "rdma", 00:28:47.828 "traddr": "192.168.100.8", 00:28:47.828 "adrfam": "ipv4", 00:28:47.828 "trsvcid": "4420", 00:28:47.828 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:47.828 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:47.828 "hdgst": false, 00:28:47.828 "ddgst": false 00:28:47.828 }, 00:28:47.828 "method": "bdev_nvme_attach_controller" 00:28:47.828 },{ 00:28:47.828 "params": { 00:28:47.828 "name": "Nvme3", 00:28:47.828 "trtype": "rdma", 00:28:47.828 "traddr": "192.168.100.8", 00:28:47.828 "adrfam": "ipv4", 00:28:47.828 "trsvcid": "4420", 00:28:47.828 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:47.828 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:47.828 "hdgst": false, 00:28:47.828 "ddgst": false 00:28:47.828 }, 00:28:47.828 "method": "bdev_nvme_attach_controller" 00:28:47.828 },{ 00:28:47.828 "params": { 00:28:47.828 "name": "Nvme4", 00:28:47.828 "trtype": "rdma", 00:28:47.828 "traddr": "192.168.100.8", 00:28:47.828 "adrfam": "ipv4", 00:28:47.828 "trsvcid": "4420", 00:28:47.828 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:47.828 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:47.828 "hdgst": false, 00:28:47.828 "ddgst": false 00:28:47.828 }, 00:28:47.828 "method": "bdev_nvme_attach_controller" 00:28:47.828 },{ 00:28:47.828 "params": { 00:28:47.828 "name": "Nvme5", 00:28:47.828 "trtype": "rdma", 00:28:47.828 "traddr": "192.168.100.8", 00:28:47.828 "adrfam": "ipv4", 00:28:47.828 "trsvcid": "4420", 00:28:47.828 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:47.828 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:47.828 "hdgst": false, 00:28:47.828 "ddgst": false 00:28:47.828 }, 00:28:47.828 "method": "bdev_nvme_attach_controller" 00:28:47.828 },{ 00:28:47.828 "params": { 00:28:47.828 "name": "Nvme6", 00:28:47.828 "trtype": "rdma", 00:28:47.828 "traddr": "192.168.100.8", 00:28:47.828 "adrfam": "ipv4", 00:28:47.828 "trsvcid": "4420", 00:28:47.828 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:47.828 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:47.828 "hdgst": false, 00:28:47.828 "ddgst": false 00:28:47.828 }, 00:28:47.828 "method": "bdev_nvme_attach_controller" 00:28:47.828 },{ 00:28:47.828 "params": { 00:28:47.828 "name": "Nvme7", 00:28:47.828 "trtype": "rdma", 00:28:47.828 "traddr": "192.168.100.8", 00:28:47.828 "adrfam": "ipv4", 00:28:47.828 "trsvcid": "4420", 00:28:47.828 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:47.828 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:47.828 "hdgst": false, 00:28:47.828 "ddgst": false 00:28:47.828 }, 00:28:47.828 "method": "bdev_nvme_attach_controller" 00:28:47.828 },{ 00:28:47.828 "params": { 00:28:47.828 "name": "Nvme8", 00:28:47.828 "trtype": "rdma", 00:28:47.828 "traddr": "192.168.100.8", 00:28:47.828 "adrfam": "ipv4", 00:28:47.828 "trsvcid": "4420", 00:28:47.828 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:28:47.828 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:47.828 "hdgst": false, 00:28:47.828 "ddgst": false 00:28:47.828 }, 00:28:47.828 "method": "bdev_nvme_attach_controller" 00:28:47.828 },{ 00:28:47.828 "params": { 00:28:47.828 "name": "Nvme9", 00:28:47.828 "trtype": "rdma", 00:28:47.828 "traddr": "192.168.100.8", 00:28:47.828 "adrfam": "ipv4", 00:28:47.828 "trsvcid": "4420", 00:28:47.828 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:47.828 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:47.828 "hdgst": false, 00:28:47.828 "ddgst": false 00:28:47.828 }, 00:28:47.828 "method": "bdev_nvme_attach_controller" 00:28:47.828 },{ 00:28:47.828 "params": { 00:28:47.828 "name": "Nvme10", 00:28:47.828 "trtype": "rdma", 00:28:47.828 "traddr": "192.168.100.8", 00:28:47.828 "adrfam": "ipv4", 00:28:47.828 "trsvcid": "4420", 00:28:47.828 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:47.828 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:47.828 "hdgst": false, 00:28:47.828 "ddgst": false 00:28:47.828 }, 00:28:47.828 "method": "bdev_nvme_attach_controller" 00:28:47.828 }' 00:28:47.828 [2024-11-28 11:01:26.625250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.087 [2024-11-28 11:01:26.728227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.024 Running I/O for 10 seconds... 00:28:49.024 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.024 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:49.024 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:49.024 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.024 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.284 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.543 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.543 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:49.543 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:49.543 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=155 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 155 -ge 100 ']' 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1808433 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1808433 ']' 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1808433 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.803 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1808433 00:28:50.063 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:50.063 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:50.063 11:01:28 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1808433' 00:28:50.063 killing process with pid 1808433 00:28:50.063 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1808433 00:28:50.063 11:01:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1808433 00:28:51.006 2485.00 IOPS, 155.31 MiB/s [2024-11-28T10:01:29.887Z] [2024-11-28 11:01:29.743983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.006 [2024-11-28 11:01:29.744052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.006 [2024-11-28 11:01:29.744071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.006 [2024-11-28 11:01:29.744085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.006 [2024-11-28 11:01:29.744098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.006 [2024-11-28 11:01:29.744111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.006 [2024-11-28 11:01:29.744127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.006 [2024-11-28 11:01:29.744139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.006 [2024-11-28 11:01:29.746953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.006 [2024-11-28 11:01:29.746980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:28:51.006 [2024-11-28 11:01:29.747008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.006 [2024-11-28 11:01:29.747022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.006 [2024-11-28 11:01:29.747036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.006 [2024-11-28 11:01:29.747047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.006 [2024-11-28 11:01:29.747060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.006 [2024-11-28 11:01:29.747072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.006 [2024-11-28 11:01:29.747085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.006 [2024-11-28 11:01:29.747096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.749376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.007 [2024-11-28 11:01:29.749396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:51.007 [2024-11-28 11:01:29.749418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.749432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.749445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.749458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.749470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.749482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.749494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.749506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.751852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.007 [2024-11-28 11:01:29.751870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:28:51.007 [2024-11-28 11:01:29.751892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.751909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.751922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.751934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.751946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.751958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.751970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.751981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.754350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.007 [2024-11-28 11:01:29.754369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:51.007 [2024-11-28 11:01:29.754391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.754405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.754418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.754429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.754441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.754453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.754466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.754477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.756983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.007 [2024-11-28 11:01:29.757002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:28:51.007 [2024-11-28 11:01:29.757030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.757043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.757056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.757067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.757080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.757092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.757104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.757118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.759534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.007 [2024-11-28 11:01:29.759557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:51.007 [2024-11-28 11:01:29.759586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.759603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.759625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.759640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.759657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.759672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.759688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.759704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.761677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.007 [2024-11-28 11:01:29.761701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:28:51.007 [2024-11-28 11:01:29.761728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.761745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.761761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.761777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.761793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.761809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.761825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.761840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.764249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.007 [2024-11-28 11:01:29.764272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:51.007 [2024-11-28 11:01:29.764300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.764317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.764333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.007 [2024-11-28 11:01:29.764353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.007 [2024-11-28 11:01:29.764369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.008 [2024-11-28 11:01:29.764385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.764401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.008 [2024-11-28 11:01:29.764417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.766804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.008 [2024-11-28 11:01:29.766828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:28:51.008 [2024-11-28 11:01:29.766858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.008 [2024-11-28 11:01:29.766875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.766892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.008 [2024-11-28 11:01:29.766908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.766925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.008 [2024-11-28 11:01:29.766940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.766957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.008 [2024-11-28 11:01:29.766972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.769065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.008 [2024-11-28 11:01:29.769089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:51.008 [2024-11-28 11:01:29.771634] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:28:51.008 [2024-11-28 11:01:29.774335] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:28:51.008 [2024-11-28 11:01:29.776667] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:51.008 [2024-11-28 11:01:29.779150] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:28:51.008 [2024-11-28 11:01:29.781313] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:51.008 [2024-11-28 11:01:29.783665] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:51.008 [2024-11-28 11:01:29.785971] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:51.008 [2024-11-28 11:01:29.788252] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
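
Annotation (not part of the log): the burst of errors above is the expected half of shutdown_tc3. Once the parent nvmf_tgt (pid 1808433) is killed, every admin queue pair on the initiator side hits CQ transport error -6, the cnode1 through cnode10 controllers are marked failed, and the pending failover attempts are skipped. A rough sketch of the driving sequence, built from the `waitforio` and `killprocess` calls visible earlier in the trace; the inline comments are an interpretation, not the SPDK source:

    # Sketch of the test flow reflected in the log above.
    waitforio /var/tmp/bdevperf.sock Nvme1n1   # poll bdev_get_iostat until num_read_ops >= 100
    killprocess $nvmfpid                       # kill the target; its RDMA queue pairs vanish
    # bdevperf keeps running; its controllers report "CQ transport error -6 (No such
    # device or address)" and drop into the failed state, as logged above.
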
00:28:51.008 [2024-11-28 11:01:29.788371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf300 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002acf240 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002abf180 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aaf0c0 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a9f000 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a8ef40 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a7ee80 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a6edc0 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a5ed00 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a4ec40 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a3eb80 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a2eac0 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a1ea00 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a0e940 len:0x10000 key:0x183d00 00:28:51.008 [2024-11-28 11:01:29.788967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.788989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002deffc0 len:0x10000 key:0x183e00 00:28:51.008 [2024-11-28 11:01:29.789008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.789032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ddff00 len:0x10000 key:0x183e00 00:28:51.008 [2024-11-28 11:01:29.789049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.789072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dcfe40 len:0x10000 key:0x183e00 00:28:51.008 [2024-11-28 11:01:29.789089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.008 [2024-11-28 11:01:29.789112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dbfd80 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dafcc0 len:0x10000 
key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d9fc00 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d8fb40 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d7fa80 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d6f9c0 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d5f900 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d4f840 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d3f780 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d2f6c0 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d1f600 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d0f540 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cff480 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cef3c0 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cdf300 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ccf240 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cbf180 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002caf0c0 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c9f000 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.789977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c8ef40 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.789994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.790017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c7ee80 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.790035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.790058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c6edc0 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.790075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.790100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c5ed00 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.790117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.790140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c4ec40 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.790157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.790179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c3eb80 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.790197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.790220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c2eac0 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.790237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.790260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c1ea00 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.790277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.790299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c0e940 len:0x10000 key:0x183e00 00:28:51.009 [2024-11-28 11:01:29.790316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.790338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002feffc0 len:0x10000 key:0x184600 00:28:51.009 [2024-11-28 11:01:29.790358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.009 [2024-11-28 11:01:29.790380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fdff00 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fcfe40 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fbfd80 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fafcc0 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f9fc00 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f8fb40 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f7fa80 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f6f9c0 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f5f900 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x201002f4f840 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f3f780 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f2f6c0 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f1f600 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f0f540 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eff480 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.790965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.790987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eef3c0 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.791004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.791026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002edf300 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.791043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.791066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aef3c0 len:0x10000 key:0x183d00 00:28:51.010 [2024-11-28 11:01:29.791083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.794306] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
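The block of paired NOTICE lines above is the NVMe driver dumping every WRITE that was still outstanding on the I/O qpair when the target deleted its submission queue during the failover test: the first line of each pair prints the command (cid, LBA, and the keyed SGL address/RKEY of its data buffer), and the second prints the completion it received, "ABORTED - SQ DELETION (00/08)", i.e. generic status code type 0x0 with status code 0x08. Below is a minimal sketch, written against the public SPDK NVMe API, of how a completion callback could recognize that status; the callback name and its use here are hypothetical and not part of the test code.

```c
#include "spdk/nvme.h"

/* Hypothetical completion callback: classify the "ABORTED - SQ DELETION"
 * completions seen in the log (sct 0x0, sc 0x08) so they can be retried
 * after the controller reset, instead of being treated as media errors. */
static void
write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* WRITE completed normally */
	}

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The queue was deleted underneath us (reset/failover in
		 * flight); the I/O never failed on the media and may be
		 * resubmitted once the path recovers. */
		return;
	}

	/* Any other error status is a real I/O failure. */
}
```

Because the status is an abort rather than a media error, these writes are simply counted as failed I/Os by the verify workload, which is what the per-device summary further down reports.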
00:28:51.010 [2024-11-28 11:01:29.794343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf180 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.794363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.794392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eaf0c0 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.794411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.794435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e9f000 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.794453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.794475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e8ef40 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.794493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.794519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e7ee80 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.794537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.794559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e6edc0 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.794577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.794600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e5ed00 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.794635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.794659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e4ec40 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.794677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.794699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e3eb80 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.794717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.794739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:5 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e2eac0 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.794756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.010 [2024-11-28 11:01:29.794780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e1ea00 len:0x10000 key:0x184600 00:28:51.010 [2024-11-28 11:01:29.794797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.794819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e0e940 len:0x10000 key:0x184600 00:28:51.011 [2024-11-28 11:01:29.794836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.794858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031effc0 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.794875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.794898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031dff00 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.794915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.794938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031cfe40 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.794955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.794977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031bfd80 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.794997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031afcc0 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100319fc00 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fb40 len:0x10000 
key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fa80 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316f9c0 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315f900 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314f840 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313f780 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312f6c0 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100311f600 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f540 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff480 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef3c0 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df300 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf240 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf180 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af0c0 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100309f000 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308ef40 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307ee80 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306edc0 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305ed00 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.011 [2024-11-28 11:01:29.795908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304ec40 len:0x10000 key:0x184700 00:28:51.011 [2024-11-28 11:01:29.795928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.795953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303eb80 len:0x10000 key:0x184700 00:28:51.012 [2024-11-28 11:01:29.795971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.795993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302eac0 len:0x10000 key:0x184700 00:28:51.012 [2024-11-28 11:01:29.796011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301ea00 len:0x10000 key:0x184700 00:28:51.012 [2024-11-28 11:01:29.796051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300e940 len:0x10000 key:0x184700 00:28:51.012 [2024-11-28 11:01:29.796090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033effc0 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff00 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cfe40 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfd80 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afcc0 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fc00 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fb40 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fa80 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100336f9c0 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335f900 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334f840 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333f780 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23424 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20100332f6c0 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f600 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f540 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff480 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef3c0 len:0x10000 key:0x184400 00:28:51.012 [2024-11-28 11:01:29.796768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.012 [2024-11-28 11:01:29.796791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df300 len:0x10000 key:0x184400 00:28:51.013 [2024-11-28 11:01:29.796808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-28 11:01:29.796831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf240 len:0x10000 key:0x184400 00:28:51.013 [2024-11-28 11:01:29.796852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-28 11:01:29.796875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf180 len:0x10000 key:0x184400 00:28:51.013 [2024-11-28 11:01:29.796892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-28 11:01:29.796914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf240 len:0x10000 key:0x184600 00:28:51.013 [2024-11-28 11:01:29.796931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.013 [2024-11-28 11:01:29.828041] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:28:51.013 [2024-11-28 11:01:29.828129] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:51.013 [2024-11-28 11:01:29.828149] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:51.013 [2024-11-28 11:01:29.828166] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:28:51.013 [2024-11-28 11:01:29.828181] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:51.013 [2024-11-28 11:01:29.828197] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:28:51.013 [2024-11-28 11:01:29.828212] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:28:51.013 [2024-11-28 11:01:29.828228] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:28:51.013 [2024-11-28 11:01:29.828242] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:28:51.013 [2024-11-28 11:01:29.828258] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:51.013 [2024-11-28 11:01:29.828273] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
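All ten subsystems (cnode1 through cnode10) hit qpair errors at essentially the same time, so while the first error on each controller kicks off a reset/failover, every further error that arrives before it finishes is only acknowledged with "Unable to perform failover, already in progress." The per-job summary that follows can be read off its header line: runtime in seconds, IOPS, throughput in MiB/s, failed I/Os per second, timeouts per second, then average/min/max latency in microseconds. The IOPS and MiB/s columns are consistent with the 64 KiB I/O size of the verify jobs; for Nvme1n1, for example, 132.03 IOPS x 64 KiB is roughly 8.25 MiB/s, matching the reported value.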
00:28:51.013 [2024-11-28 11:01:29.835143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:51.013 [2024-11-28 11:01:29.835181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:51.013 [2024-11-28 11:01:29.836061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:51.013 [2024-11-28 11:01:29.836092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:51.013 [2024-11-28 11:01:29.836109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:51.013 [2024-11-28 11:01:29.836125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:51.013 task offset: 35840 on job bdev=Nvme1n1 fails
00:28:51.013
00:28:51.013 Latency(us)
00:28:51.013 [2024-11-28T10:01:29.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:51.013 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.013 Job: Nvme1n1 ended in about 1.94 seconds with error
00:28:51.013 Verification LBA range: start 0x0 length 0x400
00:28:51.013 Nvme1n1 : 1.94 132.03 8.25 33.01 0.00 384462.52 25270.68 1060320.05
00:28:51.013 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.013 Job: Nvme2n1 ended in about 1.94 seconds with error
00:28:51.013 Verification LBA range: start 0x0 length 0x400
00:28:51.013 Nvme2n1 : 1.94 131.97 8.25 32.99 0.00 380917.51 31457.28 1053609.16
00:28:51.013 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.013 Job: Nvme3n1 ended in about 1.94 seconds with error
00:28:51.013 Verification LBA range: start 0x0 length 0x400
00:28:51.013 Nvme3n1 : 1.94 137.06 8.57 32.98 0.00 366187.09 4980.74 1053609.16
00:28:51.013 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.013 Job: Nvme4n1 ended in about 1.94 seconds with error
00:28:51.013 Verification LBA range: start 0x0 length 0x400
00:28:51.013 Nvme4n1 : 1.94 141.64 8.85 32.96 0.00 353438.08 7811.89 1053609.16
00:28:51.013 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.013 Job: Nvme5n1 ended in about 1.94 seconds with error
00:28:51.013 Verification LBA range: start 0x0 length 0x400
00:28:51.013 Nvme5n1 : 1.94 135.91 8.49 32.95 0.00 361923.20 10066.33 1053609.16
00:28:51.013 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.013 Job: Nvme6n1 ended in about 1.94 seconds with error
00:28:51.013 Verification LBA range: start 0x0 length 0x400
00:28:51.013 Nvme6n1 : 1.94 135.85 8.49 32.93 0.00 358754.29 13631.49 1053609.16
00:28:51.013 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.013 Job: Nvme7n1 ended in about 1.94 seconds with error
00:28:51.013 Verification LBA range: start 0x0 length 0x400
00:28:51.013 Nvme7n1 : 1.94 134.77 8.42 32.92 0.00 357657.37 16462.64 1046898.28
00:28:51.013 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.013 Job: Nvme8n1 ended in about 1.94 seconds with error
00:28:51.013 Verification LBA range: start 0x0 length 0x400
00:28:51.013 Nvme8n1 : 1.94 132.65 8.29 32.91 0.00 358995.30 19398.66 1046898.28
00:28:51.013 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.013 Job: Nvme9n1 ended in about 1.90 seconds with error
00:28:51.013 Verification LBA range: start 0x0 length 0x400
00:28:51.013 Nvme9n1 : 1.90 134.62 8.41 33.65 0.00 351215.29 51170.51 1087163.60
00:28:51.013 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.013 Job: Nvme10n1 ended in about 1.91 seconds with error
00:28:51.013 Verification LBA range: start 0x0 length 0x400
00:28:51.013 Nvme10n1 : 1.91 67.10 4.19 33.55 0.00 581420.10 54106.52 1073741.82
00:28:51.013 [2024-11-28T10:01:29.894Z] ===================================================================================================================
00:28:51.013 [2024-11-28T10:01:29.894Z] Total : 1283.60 80.23 330.85 0.00 377016.13 4980.74 1087163.60
00:28:51.273 [2024-11-28 11:01:29.965004] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:51.274 [2024-11-28 11:01:29.965074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:51.274 [2024-11-28 11:01:29.965107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:28:51.274 [2024-11-28 11:01:29.965130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:28:51.274 [2024-11-28 11:01:29.965145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:51.274 [2024-11-28 11:01:29.975823] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:51.274 [2024-11-28 11:01:29.975856] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:51.274 [2024-11-28 11:01:29.975879] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800
00:28:51.274 [2024-11-28 11:01:29.975969] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:51.274 [2024-11-28 11:01:29.975986] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:51.274 [2024-11-28 11:01:29.975996] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200007fff240
00:28:51.274 [2024-11-28 11:01:29.981151] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:51.274 [2024-11-28 11:01:29.981178] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:51.274 [2024-11-28 11:01:29.981190] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177bb200
00:28:51.274 [2024-11-28 11:01:29.981272] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:51.274 [2024-11-28 11:01:29.981286] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:51.274 [2024-11-28 11:01:29.981295] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177d3dc0
00:28:51.274 [2024-11-28 11:01:29.981418] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:51.274 [2024-11-28
11:01:29.981435] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:51.274 [2024-11-28 11:01:29.981447] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177be180 00:28:51.274 [2024-11-28 11:01:29.981546] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:51.274 [2024-11-28 11:01:29.981563] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:51.274 [2024-11-28 11:01:29.981575] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177c7500 00:28:51.274 [2024-11-28 11:01:29.982438] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:51.274 [2024-11-28 11:01:29.982460] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:51.274 [2024-11-28 11:01:29.982473] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001778e680 00:28:51.274 [2024-11-28 11:01:29.982568] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:51.274 [2024-11-28 11:01:29.982585] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:51.274 [2024-11-28 11:01:29.982597] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017783a80 00:28:51.274 [2024-11-28 11:01:29.982684] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:51.274 [2024-11-28 11:01:29.982701] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:51.274 [2024-11-28 11:01:29.982713] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177a3bc0 00:28:51.274 [2024-11-28 11:01:29.982813] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:51.274 [2024-11-28 11:01:29.982830] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:51.274 [2024-11-28 11:01:29.982842] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001778ee00 00:28:52.213 [2024-11-28 11:01:30.980303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:52.213 [2024-11-28 11:01:30.980357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:52.213 [2024-11-28 11:01:30.981734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:52.213 [2024-11-28 11:01:30.981752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
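Once the resets start, the host tries to re-establish its RDMA queue pairs, but the target side is still tearing the subsystems down, so the RDMA connection manager answers each connect request with RDMA_CM_EVENT_REJECTED instead of RDMA_CM_EVENT_ESTABLISHED; nvme_rdma then fails the connect with -74 (-EBADMSG on Linux), and the qpairs that had already gone away report CQ transport error -6 (-ENXIO, "No such device or address"). The sketch below shows the shape of that handshake check using plain librdmacm rather than SPDK's internal code; the function and its surrounding setup are illustrative only.

```c
#include <rdma/rdma_cma.h>
#include <stdio.h>

/* Minimal sketch of the connection-manager step the log is reporting on:
 * after rdma_connect(), the initiator expects RDMA_CM_EVENT_ESTABLISHED;
 * a busy or restarting target answers with RDMA_CM_EVENT_REJECTED instead.
 * `channel` is assumed to be the event channel the cm_id was created on. */
static int
wait_for_established(struct rdma_event_channel *channel)
{
	struct rdma_cm_event *event;

	if (rdma_get_cm_event(channel, &event) != 0) {
		return -1;
	}

	if (event->event != RDMA_CM_EVENT_ESTABLISHED) {
		/* e.g. RDMA_CM_EVENT_REJECTED (status = 8), as in the log */
		fprintf(stderr, "expected ESTABLISHED, got %s (status = %d)\n",
			rdma_event_str(event->event), event->status);
		rdma_ack_cm_event(event);
		return -1;
	}

	rdma_ack_cm_event(event);
	return 0;
}
```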
00:28:52.213 [2024-11-28 11:01:30.981808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:52.213 [2024-11-28 11:01:30.981822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:52.213 [2024-11-28 11:01:30.981839] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:28:52.213 [2024-11-28 11:01:30.981858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:52.213 [2024-11-28 11:01:30.981883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:52.213 [2024-11-28 11:01:30.981895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:52.213 [2024-11-28 11:01:30.981906] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:28:52.213 [2024-11-28 11:01:30.981918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:52.213 [2024-11-28 11:01:30.985456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:52.213 [2024-11-28 11:01:30.985481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:52.213 [2024-11-28 11:01:30.987078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:52.213 [2024-11-28 11:01:30.987095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:52.213 [2024-11-28 11:01:30.988561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:52.213 [2024-11-28 11:01:30.988577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:52.213 [2024-11-28 11:01:30.989987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:52.213 [2024-11-28 11:01:30.990004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:52.213 [2024-11-28 11:01:30.991221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:52.213 [2024-11-28 11:01:30.991237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:52.213 [2024-11-28 11:01:30.992503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:52.213 [2024-11-28 11:01:30.992519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:28:52.213 [2024-11-28 11:01:30.993617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:52.213 [2024-11-28 11:01:30.993638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:52.213 [2024-11-28 11:01:30.995013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:52.213 [2024-11-28 11:01:30.995034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:52.213 [2024-11-28 11:01:30.995048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:52.213 [2024-11-28 11:01:30.995067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:52.213 [2024-11-28 11:01:30.995082] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:28:52.213 [2024-11-28 11:01:30.995099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:52.213 [2024-11-28 11:01:30.995122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:52.213 [2024-11-28 11:01:30.995138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:52.213 [2024-11-28 11:01:30.995152] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:28:52.214 [2024-11-28 11:01:30.995167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:52.214 [2024-11-28 11:01:30.995187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:52.214 [2024-11-28 11:01:30.995202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:52.214 [2024-11-28 11:01:30.995216] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:28:52.214 [2024-11-28 11:01:30.995232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:52.214 [2024-11-28 11:01:30.995249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:52.214 [2024-11-28 11:01:30.995264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:52.214 [2024-11-28 11:01:30.995278] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:28:52.214 [2024-11-28 11:01:30.995293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:28:52.214 [2024-11-28 11:01:30.995410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:52.214 [2024-11-28 11:01:30.995429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:52.214 [2024-11-28 11:01:30.995443] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:28:52.214 [2024-11-28 11:01:30.995458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:52.214 [2024-11-28 11:01:30.995476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:52.214 [2024-11-28 11:01:30.995491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:52.214 [2024-11-28 11:01:30.995505] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:28:52.214 [2024-11-28 11:01:30.995521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:52.214 [2024-11-28 11:01:30.995539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:52.214 [2024-11-28 11:01:30.995555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:52.214 [2024-11-28 11:01:30.995569] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:28:52.214 [2024-11-28 11:01:30.995584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:52.214 [2024-11-28 11:01:30.995602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:52.214 [2024-11-28 11:01:30.995682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:52.214 [2024-11-28 11:01:30.995700] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:28:52.214 [2024-11-28 11:01:30.995715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
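The errors above are the expected outcome of tc3: the target is torn down while the host-side bdev controllers are still trying to reconnect, so every queue pair is rejected at the RDMA CM layer and each of the ten controllers (cnode1..cnode10) ends up stuck in the failed state. A minimal, hypothetical way to assert that pattern from a saved copy of this output (the marker strings come from the log itself; the helper and the capture path are not part of the test suite):

    # hedged sketch: count the expected tc3 failure markers in a captured log
    log=/tmp/nvmf_shutdown_tc3.log                          # hypothetical capture location
    rejected=$(grep -c 'RDMA_CM_EVENT_REJECTED' "$log")
    failed_resets=$(grep -c 'Resetting controller failed' "$log")
    echo "CM rejections: $rejected, failed controller resets: $failed_resets"
    (( rejected > 0 && failed_resets > 0 )) || echo "unexpected: failure markers missing"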
00:28:53.594 11:01:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1808756 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1808756 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1808756 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:54.533 11:01:33 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:54.533 rmmod nvme_rdma 00:28:54.533 rmmod nvme_fabrics 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1808433 ']' 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1808433 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1808433 ']' 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1808433 00:28:54.533 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1808433) - No such process 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1808433 is not found' 00:28:54.533 Process with pid 1808433 is not found 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:54.533 00:28:54.533 real 0m9.425s 00:28:54.533 user 0m34.131s 00:28:54.533 sys 0m1.944s 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:54.533 ************************************ 00:28:54.533 END TEST nvmf_shutdown_tc3 00:28:54.533 ************************************ 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:54.533 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:54.534 ************************************ 00:28:54.534 START TEST nvmf_shutdown_tc4 00:28:54.534 ************************************ 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
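The arrays being declared and filled here (e810, x722, mlx) are simply tables of PCI vendor/device IDs that gather_supported_nvmf_pci_devs matches against the machine's NICs; the pci_net_devs expansion further down then maps each matching PCI address to its kernel net device through sysfs. Outside the harness the same lookup can be approximated with lspci and sysfs directly; this is a hedged sketch, not the common.sh implementation:

    # hedged sketch: list Mellanox (vendor 0x15b3) NICs and their net devices via sysfs
    for pci in $(lspci -D -d 15b3: | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net device under $pci: $(basename "$netdir")"
        done
    done

On this node the harness reports two such ports, 0000:d9:00.0 and 0000:d9:00.1 (device 0x1015), exposed as mlx_0_0 and mlx_0_1.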
00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:54.534 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:54.534 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:54.534 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:54.534 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:54.534 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:54.535 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:54.535 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:54.794 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:54.795 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:54.795 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:54.795 altname enp217s0f0np0 00:28:54.795 altname ens818f0np0 00:28:54.795 inet 192.168.100.8/24 scope global mlx_0_0 00:28:54.795 valid_lft forever preferred_lft forever 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:54.795 11:01:33 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:54.795 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:54.795 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:54.795 altname enp217s0f1np1 00:28:54.795 altname ens818f1np1 00:28:54.795 inet 192.168.100.9/24 scope global mlx_0_1 00:28:54.795 valid_lft forever preferred_lft forever 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:54.795 192.168.100.9' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:54.795 192.168.100.9' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:54.795 192.168.100.9' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:54.795 11:01:33 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:54.795 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:54.796 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:54.796 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1810192 00:28:54.796 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1810192 00:28:54.796 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:54.796 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1810192 ']' 00:28:54.796 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.796 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.796 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.796 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.796 11:01:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.055 [2024-11-28 11:01:33.685578] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:55.055 [2024-11-28 11:01:33.685680] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.055 [2024-11-28 11:01:33.839158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.315 [2024-11-28 11:01:33.944573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.315 [2024-11-28 11:01:33.944619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.315 [2024-11-28 11:01:33.944632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.315 [2024-11-28 11:01:33.944645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.315 [2024-11-28 11:01:33.944655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
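Just before the target was launched above, nvmftestinit derived the two RDMA addresses from those interfaces: get_ip_address runs the ip/awk/cut pipeline visible in the trace, the results are joined into RDMA_IP_LIST, and head/tail split it into NVMF_FIRST_TARGET_IP (192.168.100.8) and NVMF_SECOND_TARGET_IP (192.168.100.9). A condensed sketch of that derivation (the helper name get_ip is illustrative, not the common.sh function):

    # hedged sketch of the address derivation traced above
    get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    rdma_ips=$(printf '%s\n%s\n' "$(get_ip mlx_0_0)" "$(get_ip mlx_0_1)")
    first_ip=$(echo "$rdma_ips" | head -n 1)                 # 192.168.100.8 in this run
    second_ip=$(echo "$rdma_ips" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run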
00:28:55.315 [2024-11-28 11:01:33.947129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.315 [2024-11-28 11:01:33.947161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.315 [2024-11-28 11:01:33.947248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.315 [2024-11-28 11:01:33.947273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:55.883 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.883 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:55.883 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:55.883 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.883 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.883 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.883 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:55.883 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.883 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.883 [2024-11-28 11:01:34.576848] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fe6a4dbd940) succeed. 00:28:55.883 [2024-11-28 11:01:34.586886] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fe6a4d79940) succeed. 
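The four "Reactor started" notices correspond directly to the -m 0x1E core mask passed to nvmfappstart: 0x1E is binary 11110, i.e. cores 1-4 with core 0 left free. A quick way to read such a mask (a throwaway sketch, not harness code):

    # hedged sketch: expand an SPDK core mask such as -m 0x1E
    mask=0x1E
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "core $core enabled"    # prints cores 1 2 3 4
    done

The rpc_cmd call that follows registers the RDMA transport with 1024 shared buffers and an 8192-byte IO unit size (-u 8192); rpc_cmd forwards its arguments to scripts/rpc.py, and the two create_ib_device notices confirm that both mlx5 ports were picked up by the transport.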
00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.142 11:01:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:56.142 Malloc1 00:28:56.142 [2024-11-28 11:01:35.011715] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:56.400 Malloc2 00:28:56.400 Malloc3 00:28:56.400 Malloc4 00:28:56.657 Malloc5 00:28:56.657 Malloc6 00:28:56.914 Malloc7 00:28:56.914 Malloc8 00:28:56.914 Malloc9 00:28:57.173 Malloc10 00:28:57.173 11:01:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.173 11:01:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:57.173 11:01:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:57.173 11:01:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:57.173 11:01:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1810516 00:28:57.173 11:01:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:57.173 11:01:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:28:57.173 [2024-11-28 11:01:36.011628] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
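The cat loop above appends one block of RPC commands per subsystem into rpcs.txt, and the single rpc_cmd at shutdown.sh@36 then creates Malloc1..Malloc10, the ten cnode subsystems, and the RDMA listener on 192.168.100.8:4420 in one pass (presumably by feeding the file on stdin). The exact block shutdown.sh writes is not shown in this trace, so the following is only a plausible per-subsystem sketch built from standard SPDK RPCs, with the names and listener address mirroring the notices above:

    # hedged sketch: one plausible per-subsystem block appended to rpcs.txt (i = 1..10)
    i=1
    {
        echo "bdev_malloc_create -b Malloc$i 64 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420"
    } >> rpcs.txt

With the target populated, the spdk_nvme_perf invocation just above drives queue-depth-128 random writes against it for 20 seconds, and tc4 then kills the target mid-run, which is what produces the error flood below.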
00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1810192 00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1810192 ']' 00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1810192 00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1810192 00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1810192' 00:29:02.577 killing process with pid 1810192 00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1810192 00:29:02.577 11:01:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1810192 00:29:02.577 NVMe io qpair process completion error 00:29:02.577 NVMe io qpair process completion error 00:29:02.577 NVMe io qpair process completion error 00:29:02.577 NVMe io qpair process completion error 00:29:02.577 NVMe io qpair process completion error 00:29:02.577 NVMe io qpair process completion error 00:29:02.577 NVMe io qpair process completion error 00:29:02.577 NVMe io qpair process completion error 00:29:02.577 NVMe io qpair process completion error 00:29:02.577 NVMe io qpair process completion error 00:29:02.577 NVMe io qpair process completion error 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O 
failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O 
failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 [2024-11-28 11:01:42.105979] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 
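Every entry in this flood carries (sct=0, sc=8): status code type 0 is the generic command status set, and code 0x08 there is "Command Aborted due to SQ Deletion", which is consistent with the host aborting I/O that was still queued when the killed target's queue pairs vanished; the keep-alive submission failure above marks the admin queue going away at the same moment. A small decode helper for the generic status codes seen in logs like this (values taken from the NVMe base specification; the helper itself is hypothetical):

    # hedged sketch: decode generic (sct=0) NVMe status codes that show up in this log
    decode_generic_sc() {
        case "$1" in
            0) echo "Successful Completion" ;;
            4) echo "Data Transfer Error" ;;
            6) echo "Internal Error" ;;
            7) echo "Command Abort Requested" ;;
            8) echo "Command Aborted due to SQ Deletion" ;;
            *) echo "other generic status code: $1" ;;
        esac
    }
    decode_generic_sc 8    # -> Command Aborted due to SQ Deletion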
00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error 
(sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 starting I/O failed: -6 00:29:03.520 [2024-11-28 11:01:42.128845] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 
00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error 
(sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.520 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 [2024-11-28 11:01:42.153873] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Submitting Keep Alive failed 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error 
(sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed 
with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 starting I/O failed: -6 00:29:03.521 [2024-11-28 11:01:42.181351] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error 
(sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.521 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed 
with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 starting I/O failed: -6 00:29:03.522 [2024-11-28 11:01:42.204432] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error 
(sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed 
with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.522 starting I/O failed: -6 00:29:03.522 [2024-11-28 11:01:42.229493] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed 00:29:03.522 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error 
(sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed 
with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 starting I/O failed: -6 00:29:03.523 [2024-11-28 11:01:42.254764] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Submitting Keep Alive failed 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 starting I/O failed: -6 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 
00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.523 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error 
(sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 starting I/O failed: -6 00:29:03.524 [2024-11-28 11:01:42.279915] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Submitting Keep Alive failed 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 
00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error 
(sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.524 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 starting I/O failed: -6 00:29:03.525 [2024-11-28 11:01:42.306016] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 starting I/O failed: -6 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 starting I/O failed: -6 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 starting I/O failed: -6 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 starting I/O failed: -6 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 starting I/O failed: -6 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 starting I/O failed: -6 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 starting I/O failed: -6 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 starting I/O failed: -6 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed 
with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 [2024-11-28 11:01:42.331204] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Submitting Keep Alive failed 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write 
completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Write completed with error (sct=0, sc=8) 00:29:03.525 Initializing NVMe Controllers 00:29:03.525 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3 00:29:03.525 Controller IO queue size 128, less than required. 00:29:03.525 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.525 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2 00:29:03.526 Controller IO queue size 128, less than required. 00:29:03.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.526 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6 00:29:03.526 Controller IO queue size 128, less than required. 00:29:03.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.526 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:03.526 Controller IO queue size 128, less than required. 00:29:03.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.526 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4 00:29:03.526 Controller IO queue size 128, less than required. 00:29:03.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.526 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5 00:29:03.526 Controller IO queue size 128, less than required. 00:29:03.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.526 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7 00:29:03.526 Controller IO queue size 128, less than required. 00:29:03.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:03.526 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8 00:29:03.526 Controller IO queue size 128, less than required. 00:29:03.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.526 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10 00:29:03.526 Controller IO queue size 128, less than required. 00:29:03.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.526 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9 00:29:03.526 Controller IO queue size 128, less than required. 00:29:03.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:03.526 Initialization complete. Launching workers. 
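The attach and associate messages above come from the spdk_nvme_perf run that target/shutdown.sh fans out over the ten RDMA subsystems before the target is torn down. A minimal sketch of such a multi-subsystem invocation, assuming illustrative queue-depth, I/O-size, workload and runtime values rather than the exact ones this test passes:

# Hedged re-creation of a multi-subsystem perf run over RDMA; the -q/-o/-w/-t
# values below are placeholders, not the ones shutdown.sh actually uses.
PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
TRADDR=192.168.100.8
ARGS=()
for i in $(seq 1 10); do
  ARGS+=(-r "trtype:RDMA adrfam:IPv4 traddr:${TRADDR} trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode${i}")
done
"$PERF" -q 64 -o 4096 -w randwrite -t 10 "${ARGS[@]}"

Killing the target while this workload is in flight is what produces the write-error completions and the keep-alive failure logged above.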
00:29:03.526 ======================================================== 00:29:03.526 Latency(us) 00:29:03.526 Device Information : IOPS MiB/s Average min max 00:29:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1432.64 61.56 90569.54 127.17 1444659.88 00:29:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1429.24 61.41 89606.41 124.59 1259109.83 00:29:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1446.37 62.15 88749.06 122.36 1249388.79 00:29:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1419.75 61.00 90684.95 121.72 1312055.89 00:29:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1433.48 61.59 90086.04 133.72 1324315.78 00:29:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1432.81 61.57 90362.55 124.85 1338636.69 00:29:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1442.30 61.97 89991.69 119.35 1314362.84 00:29:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1453.83 62.47 89527.12 125.40 1279534.05 00:29:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1407.37 60.47 92744.79 121.61 1428457.45 00:29:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1472.49 63.27 88878.72 124.51 1264529.85 00:29:03.526 ======================================================== 00:29:03.526 Total : 14370.28 617.47 90109.37 119.35 1444659.88 00:29:03.526 00:29:03.526 [2024-11-28 11:01:42.371699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:03.526 [2024-11-28 11:01:42.371739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:03.526 [2024-11-28 11:01:42.373997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:03.526 [2024-11-28 11:01:42.374018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:03.526 [2024-11-28 11:01:42.375953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:03.526 [2024-11-28 11:01:42.375973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:03.526 [2024-11-28 11:01:42.377984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:03.526 [2024-11-28 11:01:42.378006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:03.526 [2024-11-28 11:01:42.380307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:03.526 [2024-11-28 11:01:42.380329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
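The per-subsystem latency summary above is plain columnar text, so the interesting numbers can be pulled out with ordinary text tools once the console output has been saved to a file. A rough sketch, assuming the row layout printed above and a hypothetical capture file named perf.log:

# Print each subsystem NQN and its average latency (us) from the table rows.
awk '/NSID 1 from core/ {
        nqn = ""
        for (i = 1; i <= NF; i++) if ($i ~ /^subnqn:/) nqn = $i
        sub(/^subnqn:/, "", nqn); sub(/\)$/, "", nqn)
        print nqn, "avg_us=" $(NF-2)
     }' perf.log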
00:29:03.526 [2024-11-28 11:01:42.382223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:03.526 [2024-11-28 11:01:42.382246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:03.526 [2024-11-28 11:01:42.384156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:03.526 [2024-11-28 11:01:42.384179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:03.526 [2024-11-28 11:01:42.386067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:03.526 [2024-11-28 11:01:42.386089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:03.526 [2024-11-28 11:01:42.388004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:03.526 [2024-11-28 11:01:42.388027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:03.785 [2024-11-28 11:01:42.417970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:03.785 [2024-11-28 11:01:42.418005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:03.785 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:05.691 11:01:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1810516 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1810516 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1810516 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:07.071 
11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:07.071 rmmod nvme_rdma 00:29:07.071 rmmod nvme_fabrics 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1810192 ']' 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1810192 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1810192 ']' 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1810192 00:29:07.071 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1810192) - No such process 00:29:07.071 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1810192 is not found' 00:29:07.072 Process with pid 1810192 is not found 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:07.072 00:29:07.072 real 0m12.252s 00:29:07.072 user 0m45.915s 00:29:07.072 sys 0m1.576s 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@10 -- # set +x 00:29:07.072 ************************************ 00:29:07.072 END TEST nvmf_shutdown_tc4 00:29:07.072 ************************************ 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:07.072 00:29:07.072 real 0m53.342s 00:29:07.072 user 2m52.767s 00:29:07.072 sys 0m13.628s 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:07.072 ************************************ 00:29:07.072 END TEST nvmf_shutdown 00:29:07.072 ************************************ 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:07.072 ************************************ 00:29:07.072 START TEST nvmf_nsid 00:29:07.072 ************************************ 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:29:07.072 * Looking for test storage... 00:29:07.072 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:07.072 11:01:45 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:07.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.072 --rc genhtml_branch_coverage=1 00:29:07.072 --rc genhtml_function_coverage=1 00:29:07.072 --rc genhtml_legend=1 00:29:07.072 --rc geninfo_all_blocks=1 00:29:07.072 --rc geninfo_unexecuted_blocks=1 00:29:07.072 00:29:07.072 ' 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:07.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.072 --rc genhtml_branch_coverage=1 00:29:07.072 --rc genhtml_function_coverage=1 00:29:07.072 --rc genhtml_legend=1 00:29:07.072 --rc geninfo_all_blocks=1 00:29:07.072 --rc geninfo_unexecuted_blocks=1 00:29:07.072 00:29:07.072 ' 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:07.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.072 --rc genhtml_branch_coverage=1 00:29:07.072 --rc genhtml_function_coverage=1 00:29:07.072 --rc genhtml_legend=1 00:29:07.072 --rc geninfo_all_blocks=1 00:29:07.072 --rc geninfo_unexecuted_blocks=1 00:29:07.072 00:29:07.072 ' 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:07.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.072 --rc genhtml_branch_coverage=1 00:29:07.072 --rc genhtml_function_coverage=1 00:29:07.072 --rc genhtml_legend=1 00:29:07.072 --rc geninfo_all_blocks=1 00:29:07.072 --rc geninfo_unexecuted_blocks=1 00:29:07.072 00:29:07.072 ' 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:07.072 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:07.333 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.333 11:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.459 11:01:54 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:15.459 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:15.459 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:15.460 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
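gather_supported_nvmf_pci_devs above walks the PCI bus for the mlx5 vendor/device IDs (0x15b3 / 0x1015 here) and resolves each function to its kernel net device, which is what produces the "Found 0000:d9:00.0 (0x15b3 - 0x1015)" line above and the "Found net devices under ..." lines that follow. The same lookup can be reproduced straight from sysfs; a stand-alone sketch, not the common.sh implementation itself:

# Enumerate Mellanox (vendor 0x15b3) PCI functions and the netdev bound to each.
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")
    [ "$vendor" = "0x15b3" ] || continue
    device=$(cat "$dev/device")
    netdev=$(ls "$dev/net" 2>/dev/null)
    echo "Found ${dev##*/} (${vendor} - ${device}): ${netdev:-no net device}"
done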
00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:15.460 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:15.460 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:15.460 11:01:54 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:15.460 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:15.721 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:15.721 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:15.721 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.721 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:15.721 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:15.721 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:15.721 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:15.721 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.721 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:15.721 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
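rdma_device_init above first loads the RDMA kernel modules, and allocate_nic_ips then reads each interface's IPv4 address with the ip/awk/cut pipeline visible in the trace. A condensed stand-alone version of that step, using the interface names from this run:

# Load the modules needed for NVMe-oF over RDMA, then report each port's IPv4 address.
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
done
for ifc in mlx_0_0 mlx_0_1; do
    addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)
    echo "$ifc -> ${addr:-no address assigned}"
done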
00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:15.722 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:15.722 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:15.722 altname enp217s0f0np0 00:29:15.722 altname ens818f0np0 00:29:15.722 inet 192.168.100.8/24 scope global mlx_0_0 00:29:15.722 valid_lft forever preferred_lft forever 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:15.722 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:15.722 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:15.722 altname enp217s0f1np1 00:29:15.722 altname ens818f1np1 00:29:15.722 inet 192.168.100.9/24 scope global mlx_0_1 00:29:15.722 valid_lft forever preferred_lft forever 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:15.722 
11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:15.722 192.168.100.9' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:15.722 192.168.100.9' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:15.722 192.168.100.9' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:15.722 11:01:54 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1816292 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1816292 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1816292 ']' 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.722 11:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:15.722 [2024-11-28 11:01:54.583913] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:29:15.722 [2024-11-28 11:01:54.584007] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.982 [2024-11-28 11:01:54.735964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.982 [2024-11-28 11:01:54.828343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.982 [2024-11-28 11:01:54.828393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.982 [2024-11-28 11:01:54.828406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.982 [2024-11-28 11:01:54.828419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.982 [2024-11-28 11:01:54.828428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
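nvmfappstart above launches nvmf_tgt in the background with the arguments shown (-i 0 -e 0xFFFF -m 1), and waitforlisten then blocks until the application's RPC socket answers. A minimal equivalent of that start-and-wait step, with an illustrative polling loop rather than the autotest helper itself:

# Start the NVMe-oF target and wait until its RPC server responds on /var/tmp/spdk.sock.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 1 &
nvmfpid=$!
for _ in $(seq 1 100); do
    if [ -S /var/tmp/spdk.sock ] && "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is up"
        break
    fi
    sleep 0.1
done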
00:29:15.982 [2024-11-28 11:01:54.829789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1816346 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:29:16.551 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=6ab89845-5022-4c83-a2a1-5735c035b5c9 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=7d33dd43-9556-4967-a486-1ffc8c0ffaf9 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=9940a814-13d7-48b4-92c7-acb6d7a00692 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.810 11:01:55 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:16.810 null0 00:29:16.810 null1 00:29:16.810 null2 00:29:16.810 [2024-11-28 11:01:55.502781] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000295c0/0x7fec3a2a6940) succeed. 00:29:16.810 [2024-11-28 11:01:55.512094] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029740/0x7fec3a262940) succeed. 00:29:16.810 [2024-11-28 11:01:55.512215] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:29:16.810 [2024-11-28 11:01:55.512294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816346 ] 00:29:16.810 [2024-11-28 11:01:55.620044] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1816346 /var/tmp/tgt2.sock 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1816346 ']' 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:16.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.810 11:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:16.810 [2024-11-28 11:01:55.668029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.070 [2024-11-28 11:01:55.768565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.638 11:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.638 11:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:17.638 11:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:18.206 [2024-11-28 11:01:56.864441] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7ff2f7d61940) succeed. 00:29:18.206 [2024-11-28 11:01:56.875413] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7ff2f7d1b940) succeed. 
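At this point target/nsid.sh is running two SPDK targets at once: the already-started nvmf_tgt on the default RPC socket and a second spdk_tgt bound to /var/tmp/tgt2.sock, which is then driven with `rpc.py -s`. A rough sketch of that pattern, assuming an SPDK checkout at $SPDK_DIR and using a simple socket poll in place of the harness's waitforlisten:

```bash
# Sketch only: second target on a dedicated RPC socket, as tgt2pid above.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # assumption: an SPDK build tree lives here

"$SPDK_DIR/build/bin/spdk_tgt" -m 2 -r /var/tmp/tgt2.sock &
tgt2pid=$!

# Crude replacement for waitforlisten: wait until the RPC socket appears.
while [ ! -S /var/tmp/tgt2.sock ]; do sleep 0.1; done

# Every RPC aimed at this instance must name its socket explicitly with -s.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/tgt2.sock rpc_get_methods > /dev/null
echo "second target (pid $tgt2pid) is answering on /var/tmp/tgt2.sock"

kill "$tgt2pid"
```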
00:29:18.206 [2024-11-28 11:01:56.952202] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:18.206 nvme0n1 nvme0n2 00:29:18.206 nvme1n1 00:29:18.206 11:01:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:18.206 11:01:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:18.206 11:01:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e 00:29:26.339 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:26.339 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:26.339 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:26.339 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:26.339 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:26.339 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:26.339 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 6ab89845-5022-4c83-a2a1-5735c035b5c9 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6ab8984550224c83a2a15735c035b5c9 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6AB8984550224C83A2A15735C035B5C9 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 6AB8984550224C83A2A15735C035B5C9 == \6\A\B\8\9\8\4\5\5\0\2\2\4\C\8\3\A\2\A\1\5\7\3\5\C\0\3\5\B\5\C\9 ]] 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:26.340 11:02:03 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 7d33dd43-9556-4967-a486-1ffc8c0ffaf9 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:26.340 11:02:03 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:26.340 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7d33dd4395564967a4861ffc8c0ffaf9 00:29:26.340 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7D33DD4395564967A4861FFC8C0FFAF9 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 7D33DD4395564967A4861FFC8C0FFAF9 == \7\D\3\3\D\D\4\3\9\5\5\6\4\9\6\7\A\4\8\6\1\F\F\C\8\C\0\F\F\A\F\9 ]] 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 9940a814-13d7-48b4-92c7-acb6d7a00692 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9940a81413d748b492c7acb6d7a00692 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9940A81413D748B492C7ACB6D7A00692 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 9940A81413D748B492C7ACB6D7A00692 == 
\9\9\4\0\A\8\1\4\1\3\D\7\4\8\B\4\9\2\C\7\A\C\B\6\D\7\A\0\0\6\9\2 ]] 00:29:26.341 11:02:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1816346 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1816346 ']' 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1816346 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1816346 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1816346' 00:29:32.923 killing process with pid 1816346 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1816346 00:29:32.923 11:02:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1816346 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:34.830 rmmod nvme_rdma 00:29:34.830 rmmod nvme_fabrics 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1816292 ']' 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1816292 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1816292 ']' 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1816292 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1816292 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1816292' 00:29:34.830 killing process with pid 1816292 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1816292 00:29:34.830 11:02:13 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1816292 00:29:36.211 11:02:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.211 11:02:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:36.211 00:29:36.211 real 0m29.146s 00:29:36.211 user 0m40.339s 00:29:36.211 sys 0m8.165s 00:29:36.211 11:02:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:36.211 11:02:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:36.211 ************************************ 00:29:36.211 END TEST nvmf_nsid 00:29:36.211 ************************************ 00:29:36.211 11:02:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:36.211 00:29:36.211 real 17m27.753s 00:29:36.211 user 51m36.349s 00:29:36.211 sys 3m46.256s 00:29:36.211 11:02:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:36.211 11:02:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:36.211 ************************************ 00:29:36.211 END TEST nvmf_target_extra 00:29:36.211 ************************************ 00:29:36.211 11:02:14 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:29:36.211 11:02:14 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:36.211 11:02:14 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:36.211 11:02:14 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:36.211 ************************************ 00:29:36.211 START TEST nvmf_host 00:29:36.211 ************************************ 00:29:36.211 11:02:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:29:36.211 * Looking for test storage... 
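The nsid checks earlier in this excerpt verify that each namespace's NGUID equals its UUID with the dashes removed: the expected value comes from the `uuidgen` output run through `tr -d -`, and the observed value from `nvme id-ns ... -o json | jq -r .nguid`. A small sketch of that comparison, using a hypothetical device name and the first UUID from this run:

```bash
# Sketch only: the UUID-to-NGUID comparison performed by target/nsid.sh above.
# /dev/nvme0n1 and the UUID are illustrative values taken from this log.
uuid2nguid() {
    # 6ab89845-5022-4c83-a2a1-5735c035b5c9 -> 6AB8984550224C83A2A15735C035B5C9
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

nvme_get_nguid() {
    local dev=$1
    nvme id-ns "$dev" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]'
}

expected=$(uuid2nguid 6ab89845-5022-4c83-a2a1-5735c035b5c9)
actual=$(nvme_get_nguid /dev/nvme0n1)
[[ "$actual" == "$expected" ]] && echo "nsid 1 NGUID matches its UUID"
```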
00:29:36.471 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:29:36.471 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:36.471 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:36.471 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:36.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.472 --rc genhtml_branch_coverage=1 00:29:36.472 --rc genhtml_function_coverage=1 00:29:36.472 --rc genhtml_legend=1 00:29:36.472 --rc geninfo_all_blocks=1 00:29:36.472 --rc geninfo_unexecuted_blocks=1 00:29:36.472 00:29:36.472 ' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:29:36.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.472 --rc genhtml_branch_coverage=1 00:29:36.472 --rc genhtml_function_coverage=1 00:29:36.472 --rc genhtml_legend=1 00:29:36.472 --rc geninfo_all_blocks=1 00:29:36.472 --rc geninfo_unexecuted_blocks=1 00:29:36.472 00:29:36.472 ' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:36.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.472 --rc genhtml_branch_coverage=1 00:29:36.472 --rc genhtml_function_coverage=1 00:29:36.472 --rc genhtml_legend=1 00:29:36.472 --rc geninfo_all_blocks=1 00:29:36.472 --rc geninfo_unexecuted_blocks=1 00:29:36.472 00:29:36.472 ' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:36.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.472 --rc genhtml_branch_coverage=1 00:29:36.472 --rc genhtml_function_coverage=1 00:29:36.472 --rc genhtml_legend=1 00:29:36.472 --rc geninfo_all_blocks=1 00:29:36.472 --rc geninfo_unexecuted_blocks=1 00:29:36.472 00:29:36.472 ' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:36.472 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:36.472 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.472 ************************************ 00:29:36.472 START TEST nvmf_multicontroller 00:29:36.472 ************************************ 00:29:36.473 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:29:36.473 * Looking for test storage... 00:29:36.732 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:36.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.732 --rc genhtml_branch_coverage=1 00:29:36.732 --rc genhtml_function_coverage=1 00:29:36.732 --rc genhtml_legend=1 00:29:36.732 --rc geninfo_all_blocks=1 00:29:36.732 --rc geninfo_unexecuted_blocks=1 00:29:36.732 00:29:36.732 ' 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:36.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.732 --rc genhtml_branch_coverage=1 00:29:36.732 --rc genhtml_function_coverage=1 00:29:36.732 --rc genhtml_legend=1 00:29:36.732 --rc geninfo_all_blocks=1 00:29:36.732 --rc geninfo_unexecuted_blocks=1 00:29:36.732 00:29:36.732 ' 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:36.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.732 --rc genhtml_branch_coverage=1 00:29:36.732 --rc genhtml_function_coverage=1 00:29:36.732 --rc genhtml_legend=1 00:29:36.732 --rc geninfo_all_blocks=1 00:29:36.732 --rc geninfo_unexecuted_blocks=1 00:29:36.732 00:29:36.732 ' 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:36.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.732 --rc genhtml_branch_coverage=1 00:29:36.732 --rc genhtml_function_coverage=1 00:29:36.732 --rc genhtml_legend=1 00:29:36.732 --rc geninfo_all_blocks=1 00:29:36.732 --rc geninfo_unexecuted_blocks=1 00:29:36.732 00:29:36.732 ' 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
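The `lt 1.15 2` call traced above is scripts/common.sh comparing the installed lcov version against 2, field by field. A simplified sketch of such a numeric, left-to-right version comparison (the real cmp_versions also splits on '-' and ':' and supports more operators):

```bash
# Sketch only: numeric, field-wise "less than" over dotted version strings.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```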
00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.732 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:36.733 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:36.733 11:02:15 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:29:36.733 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:29:36.733 00:29:36.733 real 0m0.216s 00:29:36.733 user 0m0.116s 00:29:36.733 sys 0m0.118s 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.733 ************************************ 00:29:36.733 END TEST nvmf_multicontroller 00:29:36.733 ************************************ 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.733 ************************************ 00:29:36.733 START TEST nvmf_aer 00:29:36.733 ************************************ 00:29:36.733 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:29:36.993 * Looking for test storage... 
00:29:36.993 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:36.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.993 --rc genhtml_branch_coverage=1 00:29:36.993 --rc genhtml_function_coverage=1 00:29:36.993 --rc genhtml_legend=1 00:29:36.993 --rc geninfo_all_blocks=1 00:29:36.993 --rc geninfo_unexecuted_blocks=1 00:29:36.993 00:29:36.993 ' 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:36.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.993 --rc genhtml_branch_coverage=1 00:29:36.993 --rc genhtml_function_coverage=1 00:29:36.993 --rc genhtml_legend=1 00:29:36.993 --rc geninfo_all_blocks=1 00:29:36.993 --rc geninfo_unexecuted_blocks=1 00:29:36.993 00:29:36.993 ' 00:29:36.993 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:36.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.993 --rc genhtml_branch_coverage=1 00:29:36.993 --rc genhtml_function_coverage=1 00:29:36.993 --rc genhtml_legend=1 00:29:36.994 --rc geninfo_all_blocks=1 00:29:36.994 --rc geninfo_unexecuted_blocks=1 00:29:36.994 00:29:36.994 ' 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:36.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.994 --rc genhtml_branch_coverage=1 00:29:36.994 --rc genhtml_function_coverage=1 00:29:36.994 --rc genhtml_legend=1 00:29:36.994 --rc geninfo_all_blocks=1 00:29:36.994 --rc geninfo_unexecuted_blocks=1 00:29:36.994 00:29:36.994 ' 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:36.994 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:36.994 11:02:15 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:45.122 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:45.123 11:02:23 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:45.123 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:45.123 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:45.123 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.123 
11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:45.123 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.123 11:02:23 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:45.123 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:45.123 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:45.123 altname enp217s0f0np0 00:29:45.123 altname ens818f0np0 00:29:45.123 inet 192.168.100.8/24 scope global mlx_0_0 00:29:45.123 valid_lft forever preferred_lft forever 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:45.123 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:45.123 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:45.123 altname enp217s0f1np1 00:29:45.123 altname ens818f1np1 00:29:45.123 inet 192.168.100.9/24 scope global mlx_0_1 00:29:45.123 valid_lft forever preferred_lft forever 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:45.123 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:45.124 192.168.100.9' 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:45.124 192.168.100.9' 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:45.124 192.168.100.9' 
00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1823873 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1823873 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1823873 ']' 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:45.124 11:02:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:45.384 [2024-11-28 11:02:24.070570] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:29:45.384 [2024-11-28 11:02:24.070683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.384 [2024-11-28 11:02:24.224912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:45.643 [2024-11-28 11:02:24.324631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.643 [2024-11-28 11:02:24.324681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.643 [2024-11-28 11:02:24.324693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.643 [2024-11-28 11:02:24.324722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:45.643 [2024-11-28 11:02:24.324732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.643 [2024-11-28 11:02:24.327197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.643 [2024-11-28 11:02:24.327273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.643 [2024-11-28 11:02:24.327293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.643 [2024-11-28 11:02:24.327302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.212 11:02:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:46.212 11:02:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:46.212 11:02:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:46.212 11:02:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:46.212 11:02:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.212 11:02:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.212 11:02:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:46.212 11:02:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.212 11:02:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.212 [2024-11-28 11:02:24.977943] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f7af6fbd940) succeed. 00:29:46.212 [2024-11-28 11:02:24.987702] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f7af6f79940) succeed. 
00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.471 Malloc0 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.471 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.730 [2024-11-28 11:02:25.352376] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.730 [ 00:29:46.730 { 00:29:46.730 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:46.730 "subtype": "Discovery", 00:29:46.730 "listen_addresses": [], 00:29:46.730 "allow_any_host": true, 00:29:46.730 "hosts": [] 00:29:46.730 }, 00:29:46.730 { 00:29:46.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:46.730 "subtype": "NVMe", 00:29:46.730 "listen_addresses": [ 00:29:46.730 { 00:29:46.730 "trtype": "RDMA", 00:29:46.730 "adrfam": "IPv4", 00:29:46.730 "traddr": "192.168.100.8", 00:29:46.730 "trsvcid": "4420" 00:29:46.730 } 00:29:46.730 ], 00:29:46.730 "allow_any_host": true, 00:29:46.730 "hosts": [], 00:29:46.730 "serial_number": "SPDK00000000000001", 00:29:46.730 "model_number": "SPDK bdev Controller", 00:29:46.730 "max_namespaces": 2, 00:29:46.730 "min_cntlid": 1, 00:29:46.730 "max_cntlid": 65519, 00:29:46.730 "namespaces": [ 00:29:46.730 { 00:29:46.730 "nsid": 1, 00:29:46.730 "bdev_name": "Malloc0", 00:29:46.730 "name": "Malloc0", 00:29:46.730 "nguid": "DE66C6F17A674F689252CEAFCEFC8A9B", 00:29:46.730 "uuid": "de66c6f1-7a67-4f68-9252-ceafcefc8a9b" 00:29:46.730 } 00:29:46.730 ] 00:29:46.730 } 00:29:46.730 ] 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1824153 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:46.730 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:46.731 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:29:46.731 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:29:46.731 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:46.990 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:46.991 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:29:46.991 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:29:46.991 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:46.991 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:46.991 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:46.991 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:46.991 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:46.991 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.991 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.250 Malloc1 00:29:47.250 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.250 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:47.250 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.250 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.250 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.250 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:47.250 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.250 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.250 [ 00:29:47.250 { 00:29:47.250 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:47.250 "subtype": "Discovery", 00:29:47.250 "listen_addresses": [], 00:29:47.250 "allow_any_host": true, 00:29:47.250 "hosts": [] 00:29:47.250 }, 00:29:47.250 { 00:29:47.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:47.250 "subtype": "NVMe", 00:29:47.250 "listen_addresses": [ 00:29:47.250 { 00:29:47.250 "trtype": "RDMA", 00:29:47.250 "adrfam": "IPv4", 00:29:47.250 "traddr": "192.168.100.8", 00:29:47.250 "trsvcid": "4420" 00:29:47.250 } 00:29:47.250 ], 00:29:47.250 "allow_any_host": true, 00:29:47.250 "hosts": [], 00:29:47.250 "serial_number": "SPDK00000000000001", 00:29:47.250 "model_number": "SPDK bdev Controller", 00:29:47.250 "max_namespaces": 2, 00:29:47.250 "min_cntlid": 1, 00:29:47.250 "max_cntlid": 65519, 00:29:47.250 "namespaces": [ 00:29:47.250 { 00:29:47.250 "nsid": 1, 00:29:47.250 "bdev_name": "Malloc0", 00:29:47.250 "name": "Malloc0", 00:29:47.250 "nguid": "DE66C6F17A674F689252CEAFCEFC8A9B", 00:29:47.250 "uuid": "de66c6f1-7a67-4f68-9252-ceafcefc8a9b" 00:29:47.250 }, 00:29:47.250 { 00:29:47.250 "nsid": 2, 00:29:47.250 "bdev_name": "Malloc1", 00:29:47.250 "name": "Malloc1", 00:29:47.250 "nguid": "CA319E5E109C4DFE805B664B352A4033", 00:29:47.250 "uuid": "ca319e5e-109c-4dfe-805b-664b352a4033" 00:29:47.250 } 00:29:47.250 ] 00:29:47.250 } 00:29:47.250 ] 00:29:47.250 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.250 11:02:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1824153 00:29:47.250 Asynchronous Event Request test 00:29:47.250 Attaching to 192.168.100.8 00:29:47.250 Attached to 192.168.100.8 00:29:47.250 Registering asynchronous event callbacks... 00:29:47.250 Starting namespace attribute notice tests for all controllers... 00:29:47.251 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:47.251 aer_cb - Changed Namespace 00:29:47.251 Cleaning up... 
00:29:47.251 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:47.251 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.251 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.510 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.510 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:47.510 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.510 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:47.770 rmmod nvme_rdma 00:29:47.770 rmmod nvme_fabrics 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1823873 ']' 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1823873 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1823873 ']' 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1823873 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1823873 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1823873' 00:29:47.770 killing process 
with pid 1823873 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1823873 00:29:47.770 11:02:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1823873 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:49.679 00:29:49.679 real 0m12.702s 00:29:49.679 user 0m16.221s 00:29:49.679 sys 0m7.182s 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.679 ************************************ 00:29:49.679 END TEST nvmf_aer 00:29:49.679 ************************************ 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.679 ************************************ 00:29:49.679 START TEST nvmf_async_init 00:29:49.679 ************************************ 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:29:49.679 * Looking for test storage... 00:29:49.679 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 
00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:49.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.679 --rc genhtml_branch_coverage=1 00:29:49.679 --rc genhtml_function_coverage=1 00:29:49.679 --rc genhtml_legend=1 00:29:49.679 --rc geninfo_all_blocks=1 00:29:49.679 --rc geninfo_unexecuted_blocks=1 00:29:49.679 00:29:49.679 ' 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:49.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.679 --rc genhtml_branch_coverage=1 00:29:49.679 --rc genhtml_function_coverage=1 00:29:49.679 --rc genhtml_legend=1 00:29:49.679 --rc geninfo_all_blocks=1 00:29:49.679 --rc geninfo_unexecuted_blocks=1 00:29:49.679 00:29:49.679 ' 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:49.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.679 --rc genhtml_branch_coverage=1 00:29:49.679 --rc genhtml_function_coverage=1 00:29:49.679 --rc genhtml_legend=1 00:29:49.679 --rc geninfo_all_blocks=1 00:29:49.679 --rc geninfo_unexecuted_blocks=1 00:29:49.679 00:29:49.679 ' 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:49.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.679 --rc genhtml_branch_coverage=1 00:29:49.679 --rc genhtml_function_coverage=1 00:29:49.679 --rc genhtml_legend=1 00:29:49.679 --rc geninfo_all_blocks=1 00:29:49.679 --rc geninfo_unexecuted_blocks=1 00:29:49.679 00:29:49.679 ' 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.679 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:49.680 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.680 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=bb777b304ca74e0d8d160ecc0e96ff8e 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:49.940 11:02:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:58.069 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:58.069 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:58.069 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:58.069 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:58.069 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:58.070 11:02:36 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:58.070 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:58.070 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:58.070 altname enp217s0f0np0 00:29:58.070 altname ens818f0np0 00:29:58.070 inet 192.168.100.8/24 scope global mlx_0_0 00:29:58.070 valid_lft forever preferred_lft forever 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:58.070 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:58.070 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:58.070 altname enp217s0f1np1 00:29:58.070 altname ens818f1np1 00:29:58.070 inet 192.168.100.9/24 scope global mlx_0_1 00:29:58.070 valid_lft forever preferred_lft forever 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 
2 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:58.070 192.168.100.9' 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:58.070 192.168.100.9' 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:58.070 192.168.100.9' 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:29:58.070 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1828605 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1828605 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1828605 ']' 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:58.071 11:02:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:58.330 [2024-11-28 11:02:37.003503] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:29:58.330 [2024-11-28 11:02:37.003603] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.330 [2024-11-28 11:02:37.156718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.590 [2024-11-28 11:02:37.249754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.590 [2024-11-28 11:02:37.249804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.590 [2024-11-28 11:02:37.249819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.590 [2024-11-28 11:02:37.249849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.590 [2024-11-28 11:02:37.249859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
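At this point nvmftestinit has finished preparing the fabric: the IB/RDMA kernel modules are loaded, the two mlx5 ports behind 0000:d9:00.0 and 0000:d9:00.1 appear as mlx_0_0/mlx_0_1 with 192.168.100.8 and 192.168.100.9, and nvmfappstart launches the target application. A condensed sketch of that sequence (paths, addresses and masks copied from the trace above; running it by hand outside the harness is an assumption, not part of the test flow) might look like:

    # RDMA stack the trace probes for before bringing up the target
    modprobe ib_cm
    modprobe ib_core
    modprobe ib_umad
    modprobe ib_uverbs
    modprobe iw_cm
    modprobe rdma_cm
    modprobe rdma_ucm
    modprobe nvme-rdma

    # The two mlx5 ports carry 192.168.100.8/24 and 192.168.100.9/24;
    # the trace reads them back exactly like this:
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # expect 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # expect 192.168.100.9

    # Target started with the same core mask and tracepoint mask as in the trace
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

The nvmfappstart helper additionally records the PID (1828605 in this run) and waits for the application to listen on /var/tmp/spdk.sock before any RPC is issued.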
00:29:58.590 [2024-11-28 11:02:37.251192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.158 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.158 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:59.158 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.158 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.158 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.158 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.158 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:59.158 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.158 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.159 [2024-11-28 11:02:37.847019] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f90d3fa4940) succeed. 00:29:59.159 [2024-11-28 11:02:37.856292] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f90d3f60940) succeed. 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.159 null0 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bb777b304ca74e0d8d160ecc0e96ff8e 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.159 [2024-11-28 11:02:37.978800] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.159 11:02:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.418 nvme0n1 00:29:59.418 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.418 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:59.418 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.418 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.418 [ 00:29:59.418 { 00:29:59.418 "name": "nvme0n1", 00:29:59.418 "aliases": [ 00:29:59.418 "bb777b30-4ca7-4e0d-8d16-0ecc0e96ff8e" 00:29:59.418 ], 00:29:59.418 "product_name": "NVMe disk", 00:29:59.418 "block_size": 512, 00:29:59.418 "num_blocks": 2097152, 00:29:59.418 "uuid": "bb777b30-4ca7-4e0d-8d16-0ecc0e96ff8e", 00:29:59.418 "numa_id": 1, 00:29:59.418 "assigned_rate_limits": { 00:29:59.418 "rw_ios_per_sec": 0, 00:29:59.418 "rw_mbytes_per_sec": 0, 00:29:59.418 "r_mbytes_per_sec": 0, 00:29:59.418 "w_mbytes_per_sec": 0 00:29:59.418 }, 00:29:59.418 "claimed": false, 00:29:59.418 "zoned": false, 00:29:59.418 "supported_io_types": { 00:29:59.418 "read": true, 00:29:59.418 "write": true, 00:29:59.418 "unmap": false, 00:29:59.418 "flush": true, 00:29:59.418 "reset": true, 00:29:59.418 "nvme_admin": true, 00:29:59.418 "nvme_io": true, 00:29:59.418 "nvme_io_md": false, 00:29:59.418 "write_zeroes": true, 00:29:59.418 "zcopy": false, 00:29:59.418 "get_zone_info": false, 00:29:59.418 "zone_management": false, 00:29:59.418 "zone_append": false, 00:29:59.418 "compare": true, 00:29:59.418 "compare_and_write": true, 00:29:59.418 "abort": true, 00:29:59.418 "seek_hole": false, 00:29:59.418 "seek_data": false, 00:29:59.418 "copy": true, 00:29:59.418 "nvme_iov_md": false 00:29:59.418 }, 00:29:59.418 "memory_domains": [ 00:29:59.418 { 00:29:59.418 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:29:59.418 "dma_device_type": 0 00:29:59.418 } 00:29:59.418 ], 00:29:59.418 "driver_specific": { 00:29:59.418 "nvme": [ 00:29:59.418 { 00:29:59.418 "trid": { 00:29:59.418 "trtype": "RDMA", 00:29:59.418 "adrfam": "IPv4", 00:29:59.418 "traddr": "192.168.100.8", 00:29:59.418 "trsvcid": "4420", 00:29:59.418 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:59.418 }, 00:29:59.418 "ctrlr_data": { 00:29:59.418 "cntlid": 1, 00:29:59.418 "vendor_id": "0x8086", 00:29:59.418 "model_number": "SPDK bdev Controller", 00:29:59.418 "serial_number": "00000000000000000000", 00:29:59.418 "firmware_revision": "25.01", 00:29:59.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.418 "oacs": { 00:29:59.418 "security": 0, 
00:29:59.418 "format": 0, 00:29:59.418 "firmware": 0, 00:29:59.418 "ns_manage": 0 00:29:59.418 }, 00:29:59.418 "multi_ctrlr": true, 00:29:59.418 "ana_reporting": false 00:29:59.418 }, 00:29:59.418 "vs": { 00:29:59.418 "nvme_version": "1.3" 00:29:59.418 }, 00:29:59.418 "ns_data": { 00:29:59.418 "id": 1, 00:29:59.418 "can_share": true 00:29:59.418 } 00:29:59.418 } 00:29:59.418 ], 00:29:59.418 "mp_policy": "active_passive" 00:29:59.418 } 00:29:59.418 } 00:29:59.418 ] 00:29:59.418 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.418 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:59.418 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.418 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.418 [2024-11-28 11:02:38.075809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:59.419 [2024-11-28 11:02:38.110627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:59.419 [2024-11-28 11:02:38.135861] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.419 [ 00:29:59.419 { 00:29:59.419 "name": "nvme0n1", 00:29:59.419 "aliases": [ 00:29:59.419 "bb777b30-4ca7-4e0d-8d16-0ecc0e96ff8e" 00:29:59.419 ], 00:29:59.419 "product_name": "NVMe disk", 00:29:59.419 "block_size": 512, 00:29:59.419 "num_blocks": 2097152, 00:29:59.419 "uuid": "bb777b30-4ca7-4e0d-8d16-0ecc0e96ff8e", 00:29:59.419 "numa_id": 1, 00:29:59.419 "assigned_rate_limits": { 00:29:59.419 "rw_ios_per_sec": 0, 00:29:59.419 "rw_mbytes_per_sec": 0, 00:29:59.419 "r_mbytes_per_sec": 0, 00:29:59.419 "w_mbytes_per_sec": 0 00:29:59.419 }, 00:29:59.419 "claimed": false, 00:29:59.419 "zoned": false, 00:29:59.419 "supported_io_types": { 00:29:59.419 "read": true, 00:29:59.419 "write": true, 00:29:59.419 "unmap": false, 00:29:59.419 "flush": true, 00:29:59.419 "reset": true, 00:29:59.419 "nvme_admin": true, 00:29:59.419 "nvme_io": true, 00:29:59.419 "nvme_io_md": false, 00:29:59.419 "write_zeroes": true, 00:29:59.419 "zcopy": false, 00:29:59.419 "get_zone_info": false, 00:29:59.419 "zone_management": false, 00:29:59.419 "zone_append": false, 00:29:59.419 "compare": true, 00:29:59.419 "compare_and_write": true, 00:29:59.419 "abort": true, 00:29:59.419 "seek_hole": false, 00:29:59.419 "seek_data": false, 00:29:59.419 "copy": true, 00:29:59.419 "nvme_iov_md": false 00:29:59.419 }, 00:29:59.419 "memory_domains": [ 00:29:59.419 { 00:29:59.419 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:29:59.419 "dma_device_type": 0 00:29:59.419 } 00:29:59.419 ], 00:29:59.419 "driver_specific": { 00:29:59.419 "nvme": [ 00:29:59.419 { 00:29:59.419 "trid": { 00:29:59.419 "trtype": "RDMA", 00:29:59.419 "adrfam": "IPv4", 00:29:59.419 "traddr": "192.168.100.8", 
00:29:59.419 "trsvcid": "4420", 00:29:59.419 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:59.419 }, 00:29:59.419 "ctrlr_data": { 00:29:59.419 "cntlid": 2, 00:29:59.419 "vendor_id": "0x8086", 00:29:59.419 "model_number": "SPDK bdev Controller", 00:29:59.419 "serial_number": "00000000000000000000", 00:29:59.419 "firmware_revision": "25.01", 00:29:59.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.419 "oacs": { 00:29:59.419 "security": 0, 00:29:59.419 "format": 0, 00:29:59.419 "firmware": 0, 00:29:59.419 "ns_manage": 0 00:29:59.419 }, 00:29:59.419 "multi_ctrlr": true, 00:29:59.419 "ana_reporting": false 00:29:59.419 }, 00:29:59.419 "vs": { 00:29:59.419 "nvme_version": "1.3" 00:29:59.419 }, 00:29:59.419 "ns_data": { 00:29:59.419 "id": 1, 00:29:59.419 "can_share": true 00:29:59.419 } 00:29:59.419 } 00:29:59.419 ], 00:29:59.419 "mp_policy": "active_passive" 00:29:59.419 } 00:29:59.419 } 00:29:59.419 ] 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ym5xMEvGgb 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ym5xMEvGgb 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.ym5xMEvGgb 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.419 [2024-11-28 11:02:38.219125] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.419 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.419 [2024-11-28 11:02:38.235156] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:59.679 nvme0n1 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.679 [ 00:29:59.679 { 00:29:59.679 "name": "nvme0n1", 00:29:59.679 "aliases": [ 00:29:59.679 "bb777b30-4ca7-4e0d-8d16-0ecc0e96ff8e" 00:29:59.679 ], 00:29:59.679 "product_name": "NVMe disk", 00:29:59.679 "block_size": 512, 00:29:59.679 "num_blocks": 2097152, 00:29:59.679 "uuid": "bb777b30-4ca7-4e0d-8d16-0ecc0e96ff8e", 00:29:59.679 "numa_id": 1, 00:29:59.679 "assigned_rate_limits": { 00:29:59.679 "rw_ios_per_sec": 0, 00:29:59.679 "rw_mbytes_per_sec": 0, 00:29:59.679 "r_mbytes_per_sec": 0, 00:29:59.679 "w_mbytes_per_sec": 0 00:29:59.679 }, 00:29:59.679 "claimed": false, 00:29:59.679 "zoned": false, 00:29:59.679 "supported_io_types": { 00:29:59.679 "read": true, 00:29:59.679 "write": true, 00:29:59.679 "unmap": false, 00:29:59.679 "flush": true, 00:29:59.679 "reset": true, 00:29:59.679 "nvme_admin": true, 00:29:59.679 "nvme_io": true, 00:29:59.679 "nvme_io_md": false, 00:29:59.679 "write_zeroes": true, 00:29:59.679 "zcopy": false, 00:29:59.679 "get_zone_info": false, 00:29:59.679 "zone_management": false, 00:29:59.679 "zone_append": false, 00:29:59.679 "compare": true, 00:29:59.679 "compare_and_write": true, 00:29:59.679 "abort": true, 00:29:59.679 "seek_hole": false, 00:29:59.679 "seek_data": false, 00:29:59.679 "copy": true, 00:29:59.679 "nvme_iov_md": false 00:29:59.679 }, 00:29:59.679 "memory_domains": [ 00:29:59.679 { 00:29:59.679 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:29:59.679 "dma_device_type": 0 00:29:59.679 } 00:29:59.679 ], 00:29:59.679 "driver_specific": { 00:29:59.679 "nvme": [ 00:29:59.679 { 00:29:59.679 "trid": { 00:29:59.679 "trtype": "RDMA", 00:29:59.679 "adrfam": "IPv4", 00:29:59.679 "traddr": "192.168.100.8", 00:29:59.679 "trsvcid": "4421", 00:29:59.679 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:59.679 }, 00:29:59.679 "ctrlr_data": { 00:29:59.679 "cntlid": 3, 00:29:59.679 "vendor_id": "0x8086", 00:29:59.679 "model_number": "SPDK bdev Controller", 00:29:59.679 
"serial_number": "00000000000000000000", 00:29:59.679 "firmware_revision": "25.01", 00:29:59.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.679 "oacs": { 00:29:59.679 "security": 0, 00:29:59.679 "format": 0, 00:29:59.679 "firmware": 0, 00:29:59.679 "ns_manage": 0 00:29:59.679 }, 00:29:59.679 "multi_ctrlr": true, 00:29:59.679 "ana_reporting": false 00:29:59.679 }, 00:29:59.679 "vs": { 00:29:59.679 "nvme_version": "1.3" 00:29:59.679 }, 00:29:59.679 "ns_data": { 00:29:59.679 "id": 1, 00:29:59.679 "can_share": true 00:29:59.679 } 00:29:59.679 } 00:29:59.679 ], 00:29:59.679 "mp_policy": "active_passive" 00:29:59.679 } 00:29:59.679 } 00:29:59.679 ] 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.ym5xMEvGgb 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:59.679 rmmod nvme_rdma 00:29:59.679 rmmod nvme_fabrics 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1828605 ']' 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1828605 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1828605 ']' 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1828605 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1828605 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:59.679 11:02:38 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1828605' 00:29:59.679 killing process with pid 1828605 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1828605 00:29:59.679 11:02:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1828605 00:30:00.618 11:02:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:00.618 11:02:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:00.618 00:30:00.618 real 0m11.126s 00:30:00.618 user 0m4.920s 00:30:00.618 sys 0m6.945s 00:30:00.618 11:02:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:00.618 11:02:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.618 ************************************ 00:30:00.618 END TEST nvmf_async_init 00:30:00.618 ************************************ 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.878 ************************************ 00:30:00.878 START TEST dma 00:30:00.878 ************************************ 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:30:00.878 * Looking for test storage... 
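The nvmf_async_init run that just ended exercises a fixed RPC sequence against the target: create the RDMA transport, back a namespace with a null bdev, export it through nqn.2016-06.io.spdk:cnode0 on port 4420, attach/inspect/reset/detach a host-side controller, then repeat the attach against a PSK-protected listener on port 4421. As a reference only, a rough equivalent driven through scripts/rpc.py is sketched below; the method names and arguments are copied from the trace, while the assumption that the test's rpc_cmd helper maps one-to-one onto scripts/rpc.py invocations is mine:

    # target-side setup (NGUID taken from the trace, key file is a mktemp result)
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py bdev_null_create null0 1024 512
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bb777b304ca74e0d8d160ecc0e96ff8e
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # host side: attach, inspect, reset, detach
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1
    ./scripts/rpc.py bdev_nvme_reset_controller nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
    # PSK-protected listener on 4421 (key path is whatever mktemp returned)
    ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ym5xMEvGgb
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

The bdev_get_bdevs dumps in the trace confirm the namespace UUID matches the NGUID set on the target and that cntlid advances (1, 2, 3) across the reset and the secure reattach.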
00:30:00.878 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.878 --rc genhtml_branch_coverage=1 00:30:00.878 --rc genhtml_function_coverage=1 00:30:00.878 --rc genhtml_legend=1 00:30:00.878 --rc geninfo_all_blocks=1 00:30:00.878 --rc geninfo_unexecuted_blocks=1 00:30:00.878 00:30:00.878 ' 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.878 --rc genhtml_branch_coverage=1 00:30:00.878 --rc genhtml_function_coverage=1 00:30:00.878 --rc genhtml_legend=1 00:30:00.878 --rc geninfo_all_blocks=1 00:30:00.878 --rc geninfo_unexecuted_blocks=1 00:30:00.878 00:30:00.878 ' 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.878 --rc genhtml_branch_coverage=1 00:30:00.878 --rc genhtml_function_coverage=1 00:30:00.878 --rc genhtml_legend=1 00:30:00.878 --rc geninfo_all_blocks=1 00:30:00.878 --rc geninfo_unexecuted_blocks=1 00:30:00.878 00:30:00.878 ' 00:30:00.878 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.878 --rc genhtml_branch_coverage=1 00:30:00.878 --rc genhtml_function_coverage=1 00:30:00.878 --rc genhtml_legend=1 00:30:00.878 --rc geninfo_all_blocks=1 00:30:00.878 --rc geninfo_unexecuted_blocks=1 00:30:00.879 00:30:00.879 ' 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:00.879 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
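One cosmetic wart is visible just above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and bash's [ builtin prints "integer expression expected" because an empty value is being compared numerically. It appears harmless here, since the condition simply evaluates false and the run continues, but the pattern is easy to reproduce and to guard against. The variable name in the sketch below is illustrative only, not the one common.sh actually expands:

    # Reproduces the warning seen in the trace: an empty value fed to a numeric test
    SHM_ID=""                       # hypothetical stand-in for whatever line 33 expands to ''
    if [ "$SHM_ID" -eq 1 ]; then    # -> "[: : integer expression expected", non-zero status
        echo "would add shm args"
    fi

    # A defensive form that stays quiet when the variable is empty or unset
    if [ "${SHM_ID:-0}" -eq 1 ]; then
        echo "would add shm args"
    fi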
00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:30:00.879 11:02:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:09.011 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:09.011 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:09.011 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:09.011 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:09.011 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:09.012 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:09.012 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:09.012 altname enp217s0f0np0 00:30:09.012 altname ens818f0np0 00:30:09.012 inet 192.168.100.8/24 scope global mlx_0_0 00:30:09.012 valid_lft forever preferred_lft forever 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:09.012 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:09.012 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:09.012 altname enp217s0f1np1 00:30:09.012 altname ens818f1np1 00:30:09.012 inet 192.168.100.9/24 scope global mlx_0_1 00:30:09.012 valid_lft forever preferred_lft forever 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:09.012 192.168.100.9' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:09.012 192.168.100.9' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:09.012 192.168.100.9' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=1833051 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 1833051 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 1833051 ']' 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.012 11:02:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:09.012 [2024-11-28 11:02:47.805617] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:30:09.012 [2024-11-28 11:02:47.805711] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.273 [2024-11-28 11:02:47.957113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:09.273 [2024-11-28 11:02:48.055507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.273 [2024-11-28 11:02:48.055558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.273 [2024-11-28 11:02:48.055571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.273 [2024-11-28 11:02:48.055583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.273 [2024-11-28 11:02:48.055593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:09.273 [2024-11-28 11:02:48.057769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.273 [2024-11-28 11:02:48.057778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.841 11:02:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.841 11:02:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:30:09.841 11:02:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:09.841 11:02:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:09.842 11:02:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:09.842 11:02:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.842 11:02:48 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:09.842 11:02:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.842 11:02:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:09.842 [2024-11-28 11:02:48.678136] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f3531325940) succeed. 00:30:09.842 [2024-11-28 11:02:48.687405] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f35311bd940) succeed. 00:30:10.101 11:02:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.101 11:02:48 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:30:10.101 11:02:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.101 11:02:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:10.361 Malloc0 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:10.361 [2024-11-28 11:02:49.094195] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.361 { 00:30:10.361 "params": { 00:30:10.361 "name": "Nvme$subsystem", 00:30:10.361 "trtype": "$TEST_TRANSPORT", 00:30:10.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.361 "adrfam": "ipv4", 00:30:10.361 "trsvcid": "$NVMF_PORT", 00:30:10.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.361 "hdgst": ${hdgst:-false}, 00:30:10.361 "ddgst": ${ddgst:-false} 00:30:10.361 }, 00:30:10.361 "method": "bdev_nvme_attach_controller" 00:30:10.361 } 00:30:10.361 EOF 00:30:10.361 )") 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:30:10.361 11:02:49 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:10.361 "params": { 00:30:10.361 "name": "Nvme0", 00:30:10.361 "trtype": "rdma", 00:30:10.361 "traddr": "192.168.100.8", 00:30:10.361 "adrfam": "ipv4", 00:30:10.361 "trsvcid": "4420", 00:30:10.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:10.361 "hdgst": false, 00:30:10.361 "ddgst": false 00:30:10.361 }, 00:30:10.361 "method": "bdev_nvme_attach_controller" 00:30:10.361 }' 00:30:10.361 [2024-11-28 11:02:49.176459] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:30:10.361 [2024-11-28 11:02:49.176546] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833316 ] 00:30:10.620 [2024-11-28 11:02:49.324825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:10.620 [2024-11-28 11:02:49.429520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:10.620 [2024-11-28 11:02:49.429529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.192 bdev Nvme0n1 reports 1 memory domains 00:30:17.192 bdev Nvme0n1 supports RDMA memory domain 00:30:17.192 Initialization complete, running randrw IO for 5 sec on 2 cores 00:30:17.192 ========================================================================== 00:30:17.192 Latency [us] 00:30:17.192 IOPS MiB/s Average min max 00:30:17.192 Core 2: 19340.57 75.55 826.60 273.26 12843.44 00:30:17.192 Core 3: 19251.18 75.20 830.40 283.30 13358.05 00:30:17.192 ========================================================================== 00:30:17.192 Total : 38591.75 150.75 828.49 273.26 13358.05 00:30:17.192 00:30:17.192 Total operations: 192981, translate 192981 pull_push 0 memzero 0 00:30:17.192 11:02:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:30:17.192 11:02:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:30:17.192 11:02:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:30:17.192 [2024-11-28 11:02:55.842534] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:30:17.192 [2024-11-28 11:02:55.842628] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1834405 ] 00:30:17.192 [2024-11-28 11:02:55.991794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:17.451 [2024-11-28 11:02:56.096847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.452 [2024-11-28 11:02:56.096854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:24.024 bdev Malloc0 reports 2 memory domains 00:30:24.024 bdev Malloc0 doesn't support RDMA memory domain 00:30:24.024 Initialization complete, running randrw IO for 5 sec on 2 cores 00:30:24.024 ========================================================================== 00:30:24.024 Latency [us] 00:30:24.024 IOPS MiB/s Average min max 00:30:24.024 Core 2: 12368.43 48.31 1292.76 466.51 2019.12 00:30:24.024 Core 3: 12591.13 49.18 1269.86 494.29 2468.71 00:30:24.024 ========================================================================== 00:30:24.024 Total : 24959.56 97.50 1281.21 466.51 2468.71 00:30:24.024 00:30:24.024 Total operations: 124850, translate 0 pull_push 499400 memzero 0 00:30:24.024 11:03:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:30:24.024 11:03:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:30:24.024 11:03:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:30:24.024 11:03:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:30:24.024 Ignoring -M option 00:30:24.024 [2024-11-28 11:03:02.839408] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:30:24.024 [2024-11-28 11:03:02.839502] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1835600 ] 00:30:24.284 [2024-11-28 11:03:02.991173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:24.284 [2024-11-28 11:03:03.095289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:24.284 [2024-11-28 11:03:03.095296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.856 bdev 0b424654-ddd1-4fea-9c58-787554b1f489 reports 1 memory domains 00:30:30.856 bdev 0b424654-ddd1-4fea-9c58-787554b1f489 supports RDMA memory domain 00:30:30.856 Initialization complete, running randread IO for 5 sec on 2 cores 00:30:30.856 ========================================================================== 00:30:30.856 Latency [us] 00:30:30.856 IOPS MiB/s Average min max 00:30:30.856 Core 2: 62278.78 243.28 256.01 85.85 2232.94 00:30:30.856 Core 3: 63990.48 249.96 249.17 84.51 2491.27 00:30:30.856 ========================================================================== 00:30:30.856 Total : 126269.25 493.24 252.55 84.51 2491.27 00:30:30.856 00:30:30.856 Total operations: 631456, translate 0 pull_push 0 memzero 631456 00:30:30.856 11:03:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:30:30.856 [2024-11-28 11:03:09.671644] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:33.508 Initializing NVMe Controllers 00:30:33.508 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:30:33.508 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:33.508 Initialization complete. Launching workers. 00:30:33.508 ======================================================== 00:30:33.508 Latency(us) 00:30:33.508 Device Information : IOPS MiB/s Average min max 00:30:33.508 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 1994.95 7.79 7987.81 6981.74 10978.87 00:30:33.508 ======================================================== 00:30:33.508 Total : 1994.95 7.79 7987.81 6981.74 10978.87 00:30:33.508 00:30:33.508 11:03:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:30:33.508 11:03:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:30:33.508 11:03:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:30:33.508 11:03:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:30:33.508 [2024-11-28 11:03:12.145077] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:30:33.508 [2024-11-28 11:03:12.145165] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837615 ] 00:30:33.508 [2024-11-28 11:03:12.292617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:33.779 [2024-11-28 11:03:12.395872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:33.779 [2024-11-28 11:03:12.395880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.347 bdev d1ba0f76-5661-4748-81f4-b49de86fcc08 reports 1 memory domains 00:30:40.347 bdev d1ba0f76-5661-4748-81f4-b49de86fcc08 supports RDMA memory domain 00:30:40.347 Initialization complete, running randrw IO for 5 sec on 2 cores 00:30:40.347 ========================================================================== 00:30:40.347 Latency [us] 00:30:40.347 IOPS MiB/s Average min max 00:30:40.347 Core 2: 16716.54 65.30 956.42 34.31 8045.47 00:30:40.347 Core 3: 16851.92 65.83 948.72 14.91 8191.66 00:30:40.347 ========================================================================== 00:30:40.347 Total : 33568.46 131.13 952.56 14.91 8191.66 00:30:40.347 00:30:40.347 Total operations: 167867, translate 167757 pull_push 0 memzero 110 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:40.347 rmmod nvme_rdma 00:30:40.347 rmmod nvme_fabrics 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 1833051 ']' 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 1833051 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 1833051 ']' 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 1833051 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1833051 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1833051' 00:30:40.347 killing process 
with pid 1833051 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 1833051 00:30:40.347 11:03:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 1833051 00:30:42.254 11:03:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:42.254 11:03:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:42.254 00:30:42.254 real 0m41.446s 00:30:42.254 user 1m57.883s 00:30:42.254 sys 0m8.091s 00:30:42.254 11:03:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.254 11:03:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:42.254 ************************************ 00:30:42.254 END TEST dma 00:30:42.254 ************************************ 00:30:42.254 11:03:21 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:30:42.254 11:03:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:42.254 11:03:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.254 11:03:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.254 ************************************ 00:30:42.254 START TEST nvmf_identify 00:30:42.254 ************************************ 00:30:42.254 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:30:42.514 * Looking for test storage... 00:30:42.514 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:42.514 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:42.514 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v 
= 0 )) 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:42.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.515 --rc genhtml_branch_coverage=1 00:30:42.515 --rc genhtml_function_coverage=1 00:30:42.515 --rc genhtml_legend=1 00:30:42.515 --rc geninfo_all_blocks=1 00:30:42.515 --rc geninfo_unexecuted_blocks=1 00:30:42.515 00:30:42.515 ' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:42.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.515 --rc genhtml_branch_coverage=1 00:30:42.515 --rc genhtml_function_coverage=1 00:30:42.515 --rc genhtml_legend=1 00:30:42.515 --rc geninfo_all_blocks=1 00:30:42.515 --rc geninfo_unexecuted_blocks=1 00:30:42.515 00:30:42.515 ' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:42.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.515 --rc genhtml_branch_coverage=1 00:30:42.515 --rc genhtml_function_coverage=1 00:30:42.515 --rc genhtml_legend=1 00:30:42.515 --rc geninfo_all_blocks=1 00:30:42.515 --rc geninfo_unexecuted_blocks=1 00:30:42.515 00:30:42.515 ' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:42.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.515 --rc genhtml_branch_coverage=1 00:30:42.515 --rc genhtml_function_coverage=1 00:30:42.515 --rc genhtml_legend=1 00:30:42.515 --rc geninfo_all_blocks=1 00:30:42.515 --rc geninfo_unexecuted_blocks=1 00:30:42.515 00:30:42.515 ' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:42.515 11:03:21 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:42.515 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:42.515 11:03:21 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.515 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.516 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.516 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:42.516 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:42.516 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.516 11:03:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.641 11:03:29 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:50.641 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:50.641 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:50.641 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:50.641 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:50.641 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:50.642 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:50.642 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:50.642 altname enp217s0f0np0 00:30:50.642 altname ens818f0np0 00:30:50.642 inet 192.168.100.8/24 scope global mlx_0_0 00:30:50.642 valid_lft forever preferred_lft forever 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:50.642 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:50.902 11:03:29 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:50.902 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:50.902 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:50.902 altname enp217s0f1np1 00:30:50.902 altname ens818f1np1 00:30:50.902 inet 192.168.100.9/24 scope global mlx_0_1 00:30:50.902 valid_lft forever preferred_lft forever 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:30:50.902 11:03:29 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:50.902 192.168.100.9' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:50.902 192.168.100.9' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:50.902 192.168.100.9' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1843109 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 1843109 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1843109 ']' 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.902 11:03:29 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:50.902 [2024-11-28 11:03:29.738882] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:30:50.902 [2024-11-28 11:03:29.738972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.163 [2024-11-28 11:03:29.893979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.163 [2024-11-28 11:03:29.995969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.163 [2024-11-28 11:03:29.996025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.163 [2024-11-28 11:03:29.996037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.163 [2024-11-28 11:03:29.996067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.163 [2024-11-28 11:03:29.996077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.163 [2024-11-28 11:03:29.998637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.163 [2024-11-28 11:03:29.998698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.163 [2024-11-28 11:03:29.998736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.163 [2024-11-28 11:03:29.998743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.731 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.731 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:51.731 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:51.731 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.731 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:51.731 [2024-11-28 11:03:30.599946] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f5e5bd48940) succeed. 00:30:51.731 [2024-11-28 11:03:30.609624] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f5e5bd01940) succeed. 
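The trace above covers the RDMA plumbing nvmf/common.sh performs before the identify test proper: the two mlx5 ports under 0000:d9:00.0/0000:d9:00.1 are resolved to mlx_0_0/mlx_0_1, the IB/RDMA kernel modules are loaded, 192.168.100.8 and 192.168.100.9 are read back from those interfaces, nvme-rdma is loaded on the initiator side, and nvmf_tgt is started and given an RDMA transport. A minimal standalone sketch of the same sequence (binary and RPC-socket paths here are illustrative assumptions, not taken from this run):
# Load the kernel RDMA stack the way load_ib_rdma_modules does
modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
modprobe nvme-rdma
# Start the SPDK target with the same flags as host/identify.sh, then create the RDMA transport
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
sleep 2   # the test waits on /var/tmp/spdk.sock via waitforlisten instead of a fixed sleep
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192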
00:30:51.990 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.990 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:51.991 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.991 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:52.250 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:52.250 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.250 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:52.250 Malloc0 00:30:52.250 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.250 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:52.250 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.250 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:52.250 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.250 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:52.250 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.250 11:03:30 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:52.250 [2024-11-28 11:03:31.008511] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.250 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:52.250 [ 00:30:52.250 { 00:30:52.250 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:52.250 "subtype": "Discovery", 00:30:52.250 "listen_addresses": [ 00:30:52.250 { 00:30:52.250 "trtype": "RDMA", 
00:30:52.250 "adrfam": "IPv4", 00:30:52.250 "traddr": "192.168.100.8", 00:30:52.250 "trsvcid": "4420" 00:30:52.250 } 00:30:52.250 ], 00:30:52.250 "allow_any_host": true, 00:30:52.250 "hosts": [] 00:30:52.250 }, 00:30:52.250 { 00:30:52.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:52.250 "subtype": "NVMe", 00:30:52.250 "listen_addresses": [ 00:30:52.250 { 00:30:52.250 "trtype": "RDMA", 00:30:52.250 "adrfam": "IPv4", 00:30:52.251 "traddr": "192.168.100.8", 00:30:52.251 "trsvcid": "4420" 00:30:52.251 } 00:30:52.251 ], 00:30:52.251 "allow_any_host": true, 00:30:52.251 "hosts": [], 00:30:52.251 "serial_number": "SPDK00000000000001", 00:30:52.251 "model_number": "SPDK bdev Controller", 00:30:52.251 "max_namespaces": 32, 00:30:52.251 "min_cntlid": 1, 00:30:52.251 "max_cntlid": 65519, 00:30:52.251 "namespaces": [ 00:30:52.251 { 00:30:52.251 "nsid": 1, 00:30:52.251 "bdev_name": "Malloc0", 00:30:52.251 "name": "Malloc0", 00:30:52.251 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:52.251 "eui64": "ABCDEF0123456789", 00:30:52.251 "uuid": "ba645fc6-813e-491f-9702-487e61e90bc9" 00:30:52.251 } 00:30:52.251 ] 00:30:52.251 } 00:30:52.251 ] 00:30:52.251 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.251 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:52.251 [2024-11-28 11:03:31.088154] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:30:52.251 [2024-11-28 11:03:31.088226] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1843313 ] 00:30:52.513 [2024-11-28 11:03:31.174893] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:52.513 [2024-11-28 11:03:31.175015] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:30:52.513 [2024-11-28 11:03:31.175038] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:30:52.513 [2024-11-28 11:03:31.175047] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:30:52.513 [2024-11-28 11:03:31.175096] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:52.513 [2024-11-28 11:03:31.186052] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:30:52.513 [2024-11-28 11:03:31.196589] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:52.513 [2024-11-28 11:03:31.196621] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:30:52.513 [2024-11-28 11:03:31.196645] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196661] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196674] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196683] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196696] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196705] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196715] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196724] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196734] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196743] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196753] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196762] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196772] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196783] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196793] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196802] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196812] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196821] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196833] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196842] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196852] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196860] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196870] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 
11:03:31.196879] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196896] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196905] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196915] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196924] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196935] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196946] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196956] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.196964] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:30:52.513 [2024-11-28 11:03:31.196976] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:52.513 [2024-11-28 11:03:31.196985] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:30:52.513 [2024-11-28 11:03:31.197021] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.197043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0x183d00 00:30:52.513 [2024-11-28 11:03:31.201627] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.513 [2024-11-28 11:03:31.201652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:52.513 [2024-11-28 11:03:31.201671] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.201685] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:52.513 [2024-11-28 11:03:31.201703] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:52.513 [2024-11-28 11:03:31.201714] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:52.513 [2024-11-28 11:03:31.201744] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.513 [2024-11-28 11:03:31.201759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.513 [2024-11-28 11:03:31.201797] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.513 [2024-11-28 11:03:31.201807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:30:52.513 [2024-11-28 11:03:31.201820] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:52.514 [2024-11-28 11:03:31.201832] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.201845] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:52.514 [2024-11-28 11:03:31.201857] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.201873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.514 [2024-11-28 11:03:31.201895] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.514 [2024-11-28 11:03:31.201906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:30:52.514 [2024-11-28 11:03:31.201916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:52.514 [2024-11-28 11:03:31.201927] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.201938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:52.514 [2024-11-28 11:03:31.201955] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.201967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.514 [2024-11-28 11:03:31.201992] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.514 [2024-11-28 11:03:31.202001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:52.514 [2024-11-28 11:03:31.202012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:52.514 [2024-11-28 11:03:31.202024] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202039] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.514 [2024-11-28 11:03:31.202076] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.514 [2024-11-28 11:03:31.202085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:52.514 [2024-11-28 11:03:31.202098] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:52.514 [2024-11-28 11:03:31.202108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:52.514 [2024-11-28 11:03:31.202119] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x183d00 00:30:52.514 [2024-11-28 
11:03:31.202129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:52.514 [2024-11-28 11:03:31.202241] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:52.514 [2024-11-28 11:03:31.202253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:52.514 [2024-11-28 11:03:31.202269] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.514 [2024-11-28 11:03:31.202314] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.514 [2024-11-28 11:03:31.202323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:52.514 [2024-11-28 11:03:31.202334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:52.514 [2024-11-28 11:03:31.202344] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202358] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.514 [2024-11-28 11:03:31.202397] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.514 [2024-11-28 11:03:31.202406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:52.514 [2024-11-28 11:03:31.202419] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:52.514 [2024-11-28 11:03:31.202428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:52.514 [2024-11-28 11:03:31.202439] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202451] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:52.514 [2024-11-28 11:03:31.202471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:52.514 [2024-11-28 11:03:31.202490] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:30:52.514 [2024-11-28 11:03:31.202559] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:30:52.514 [2024-11-28 11:03:31.202570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:52.514 [2024-11-28 11:03:31.202586] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:52.514 [2024-11-28 11:03:31.202598] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:52.514 [2024-11-28 11:03:31.202614] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:52.514 [2024-11-28 11:03:31.202629] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:52.514 [2024-11-28 11:03:31.202638] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:52.514 [2024-11-28 11:03:31.202650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:52.514 [2024-11-28 11:03:31.202659] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:52.514 [2024-11-28 11:03:31.202691] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.514 [2024-11-28 11:03:31.202731] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.514 [2024-11-28 11:03:31.202742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:52.514 [2024-11-28 11:03:31.202754] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.514 [2024-11-28 11:03:31.202779] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.514 [2024-11-28 11:03:31.202801] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.514 [2024-11-28 11:03:31.202822] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.514 [2024-11-28 11:03:31.202842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:52.514 [2024-11-28 11:03:31.202858] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:52.514 [2024-11-28 11:03:31.202890] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.514 [2024-11-28 11:03:31.202928] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.514 [2024-11-28 11:03:31.202937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:30:52.514 [2024-11-28 11:03:31.202948] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:52.514 [2024-11-28 11:03:31.202959] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:52.514 [2024-11-28 11:03:31.202972] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.202989] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.203006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:30:52.514 [2024-11-28 11:03:31.203047] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.514 [2024-11-28 11:03:31.203059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:52.514 [2024-11-28 11:03:31.203076] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.203095] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:52.514 [2024-11-28 11:03:31.203141] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.514 [2024-11-28 11:03:31.203158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x400 key:0x183d00 00:30:52.514 [2024-11-28 11:03:31.203169] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183d00 00:30:52.515 [2024-11-28 11:03:31.203187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.515 [2024-11-28 11:03:31.203219] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.515 [2024-11-28 11:03:31.203230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
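The GET LOG PAGE (opcode 02h) commands above, with cdw10 ending in 0x70, are the identify utility fetching the Discovery Log Page (log identifier 70h) from the discovery controller; the controller data and discovery entries printed below are rendered from those responses. For reference only (not part of this run), the same records can be pulled with the kernel initiator, assuming nvme-cli with fabrics support and the nvme-rdma module already loaded:
# Hedged alternative using nvme-cli against the same listener
nvme discover -t rdma -a 192.168.100.8 -s 4420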
00:30:52.515 [2024-11-28 11:03:31.203255] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x183d00 00:30:52.515 [2024-11-28 11:03:31.203270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183d00 00:30:52.515 [2024-11-28 11:03:31.203282] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x183d00 00:30:52.515 [2024-11-28 11:03:31.203294] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.515 [2024-11-28 11:03:31.203302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:52.515 [2024-11-28 11:03:31.203313] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x183d00 00:30:52.515 [2024-11-28 11:03:31.203322] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.515 [2024-11-28 11:03:31.203332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:52.515 [2024-11-28 11:03:31.203348] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183d00 00:30:52.515 [2024-11-28 11:03:31.203364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183d00 00:30:52.515 [2024-11-28 11:03:31.203374] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x183d00 00:30:52.515 [2024-11-28 11:03:31.203395] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.515 [2024-11-28 11:03:31.203404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:52.515 [2024-11-28 11:03:31.203422] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x183d00 00:30:52.515 ===================================================== 00:30:52.515 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:52.515 ===================================================== 00:30:52.515 Controller Capabilities/Features 00:30:52.515 ================================ 00:30:52.515 Vendor ID: 0000 00:30:52.515 Subsystem Vendor ID: 0000 00:30:52.515 Serial Number: .................... 00:30:52.515 Model Number: ........................................ 
00:30:52.515 Firmware Version: 25.01 00:30:52.515 Recommended Arb Burst: 0 00:30:52.515 IEEE OUI Identifier: 00 00 00 00:30:52.515 Multi-path I/O 00:30:52.515 May have multiple subsystem ports: No 00:30:52.515 May have multiple controllers: No 00:30:52.515 Associated with SR-IOV VF: No 00:30:52.515 Max Data Transfer Size: 131072 00:30:52.515 Max Number of Namespaces: 0 00:30:52.515 Max Number of I/O Queues: 1024 00:30:52.515 NVMe Specification Version (VS): 1.3 00:30:52.515 NVMe Specification Version (Identify): 1.3 00:30:52.515 Maximum Queue Entries: 128 00:30:52.515 Contiguous Queues Required: Yes 00:30:52.515 Arbitration Mechanisms Supported 00:30:52.515 Weighted Round Robin: Not Supported 00:30:52.515 Vendor Specific: Not Supported 00:30:52.515 Reset Timeout: 15000 ms 00:30:52.515 Doorbell Stride: 4 bytes 00:30:52.515 NVM Subsystem Reset: Not Supported 00:30:52.515 Command Sets Supported 00:30:52.515 NVM Command Set: Supported 00:30:52.515 Boot Partition: Not Supported 00:30:52.515 Memory Page Size Minimum: 4096 bytes 00:30:52.515 Memory Page Size Maximum: 4096 bytes 00:30:52.515 Persistent Memory Region: Not Supported 00:30:52.515 Optional Asynchronous Events Supported 00:30:52.515 Namespace Attribute Notices: Not Supported 00:30:52.515 Firmware Activation Notices: Not Supported 00:30:52.515 ANA Change Notices: Not Supported 00:30:52.515 PLE Aggregate Log Change Notices: Not Supported 00:30:52.515 LBA Status Info Alert Notices: Not Supported 00:30:52.515 EGE Aggregate Log Change Notices: Not Supported 00:30:52.515 Normal NVM Subsystem Shutdown event: Not Supported 00:30:52.515 Zone Descriptor Change Notices: Not Supported 00:30:52.515 Discovery Log Change Notices: Supported 00:30:52.515 Controller Attributes 00:30:52.515 128-bit Host Identifier: Not Supported 00:30:52.515 Non-Operational Permissive Mode: Not Supported 00:30:52.515 NVM Sets: Not Supported 00:30:52.515 Read Recovery Levels: Not Supported 00:30:52.515 Endurance Groups: Not Supported 00:30:52.515 Predictable Latency Mode: Not Supported 00:30:52.515 Traffic Based Keep ALive: Not Supported 00:30:52.515 Namespace Granularity: Not Supported 00:30:52.515 SQ Associations: Not Supported 00:30:52.515 UUID List: Not Supported 00:30:52.515 Multi-Domain Subsystem: Not Supported 00:30:52.515 Fixed Capacity Management: Not Supported 00:30:52.515 Variable Capacity Management: Not Supported 00:30:52.515 Delete Endurance Group: Not Supported 00:30:52.515 Delete NVM Set: Not Supported 00:30:52.515 Extended LBA Formats Supported: Not Supported 00:30:52.515 Flexible Data Placement Supported: Not Supported 00:30:52.515 00:30:52.515 Controller Memory Buffer Support 00:30:52.515 ================================ 00:30:52.515 Supported: No 00:30:52.515 00:30:52.515 Persistent Memory Region Support 00:30:52.515 ================================ 00:30:52.515 Supported: No 00:30:52.515 00:30:52.515 Admin Command Set Attributes 00:30:52.515 ============================ 00:30:52.515 Security Send/Receive: Not Supported 00:30:52.515 Format NVM: Not Supported 00:30:52.515 Firmware Activate/Download: Not Supported 00:30:52.515 Namespace Management: Not Supported 00:30:52.515 Device Self-Test: Not Supported 00:30:52.515 Directives: Not Supported 00:30:52.515 NVMe-MI: Not Supported 00:30:52.515 Virtualization Management: Not Supported 00:30:52.515 Doorbell Buffer Config: Not Supported 00:30:52.515 Get LBA Status Capability: Not Supported 00:30:52.515 Command & Feature Lockdown Capability: Not Supported 00:30:52.515 Abort Command Limit: 1 00:30:52.515 Async 
Event Request Limit: 4 00:30:52.515 Number of Firmware Slots: N/A 00:30:52.515 Firmware Slot 1 Read-Only: N/A 00:30:52.515 Firmware Activation Without Reset: N/A 00:30:52.515 Multiple Update Detection Support: N/A 00:30:52.515 Firmware Update Granularity: No Information Provided 00:30:52.515 Per-Namespace SMART Log: No 00:30:52.515 Asymmetric Namespace Access Log Page: Not Supported 00:30:52.515 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:52.515 Command Effects Log Page: Not Supported 00:30:52.515 Get Log Page Extended Data: Supported 00:30:52.515 Telemetry Log Pages: Not Supported 00:30:52.515 Persistent Event Log Pages: Not Supported 00:30:52.515 Supported Log Pages Log Page: May Support 00:30:52.515 Commands Supported & Effects Log Page: Not Supported 00:30:52.515 Feature Identifiers & Effects Log Page:May Support 00:30:52.515 NVMe-MI Commands & Effects Log Page: May Support 00:30:52.515 Data Area 4 for Telemetry Log: Not Supported 00:30:52.515 Error Log Page Entries Supported: 128 00:30:52.515 Keep Alive: Not Supported 00:30:52.515 00:30:52.515 NVM Command Set Attributes 00:30:52.515 ========================== 00:30:52.515 Submission Queue Entry Size 00:30:52.515 Max: 1 00:30:52.515 Min: 1 00:30:52.515 Completion Queue Entry Size 00:30:52.515 Max: 1 00:30:52.515 Min: 1 00:30:52.515 Number of Namespaces: 0 00:30:52.515 Compare Command: Not Supported 00:30:52.515 Write Uncorrectable Command: Not Supported 00:30:52.515 Dataset Management Command: Not Supported 00:30:52.515 Write Zeroes Command: Not Supported 00:30:52.515 Set Features Save Field: Not Supported 00:30:52.515 Reservations: Not Supported 00:30:52.515 Timestamp: Not Supported 00:30:52.515 Copy: Not Supported 00:30:52.515 Volatile Write Cache: Not Present 00:30:52.515 Atomic Write Unit (Normal): 1 00:30:52.515 Atomic Write Unit (PFail): 1 00:30:52.515 Atomic Compare & Write Unit: 1 00:30:52.515 Fused Compare & Write: Supported 00:30:52.515 Scatter-Gather List 00:30:52.515 SGL Command Set: Supported 00:30:52.515 SGL Keyed: Supported 00:30:52.515 SGL Bit Bucket Descriptor: Not Supported 00:30:52.515 SGL Metadata Pointer: Not Supported 00:30:52.515 Oversized SGL: Not Supported 00:30:52.515 SGL Metadata Address: Not Supported 00:30:52.515 SGL Offset: Supported 00:30:52.515 Transport SGL Data Block: Not Supported 00:30:52.515 Replay Protected Memory Block: Not Supported 00:30:52.515 00:30:52.515 Firmware Slot Information 00:30:52.515 ========================= 00:30:52.515 Active slot: 0 00:30:52.515 00:30:52.515 00:30:52.515 Error Log 00:30:52.515 ========= 00:30:52.515 00:30:52.515 Active Namespaces 00:30:52.515 ================= 00:30:52.515 Discovery Log Page 00:30:52.515 ================== 00:30:52.515 Generation Counter: 2 00:30:52.515 Number of Records: 2 00:30:52.515 Record Format: 0 00:30:52.515 00:30:52.515 Discovery Log Entry 0 00:30:52.515 ---------------------- 00:30:52.515 Transport Type: 1 (RDMA) 00:30:52.515 Address Family: 1 (IPv4) 00:30:52.515 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:52.515 Entry Flags: 00:30:52.516 Duplicate Returned Information: 1 00:30:52.516 Explicit Persistent Connection Support for Discovery: 1 00:30:52.516 Transport Requirements: 00:30:52.516 Secure Channel: Not Required 00:30:52.516 Port ID: 0 (0x0000) 00:30:52.516 Controller ID: 65535 (0xffff) 00:30:52.516 Admin Max SQ Size: 128 00:30:52.516 Transport Service Identifier: 4420 00:30:52.516 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:52.516 Transport Address: 192.168.100.8 00:30:52.516 
Transport Specific Address Subtype - RDMA 00:30:52.516 RDMA QP Service Type: 1 (Reliable Connected) 00:30:52.516 RDMA Provider Type: 1 (No provider specified) 00:30:52.516 RDMA CM Service: 1 (RDMA_CM) 00:30:52.516 Discovery Log Entry 1 00:30:52.516 ---------------------- 00:30:52.516 Transport Type: 1 (RDMA) 00:30:52.516 Address Family: 1 (IPv4) 00:30:52.516 Subsystem Type: 2 (NVM Subsystem) 00:30:52.516 Entry Flags: 00:30:52.516 Duplicate Returned Information: 0 00:30:52.516 Explicit Persistent Connection Support for Discovery: 0 00:30:52.516 Transport Requirements: 00:30:52.516 Secure Channel: Not Required 00:30:52.516 Port ID: 0 (0x0000) 00:30:52.516 Controller ID: 65535 (0xffff) 00:30:52.516 Admin Max SQ Size: [2024-11-28 11:03:31.203538] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:52.516 [2024-11-28 11:03:31.203560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.203571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.203585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.203600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.203625] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.203637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.203678] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.203688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.203703] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.203718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.203733] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.203750] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.203760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.203770] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:52.516 [2024-11-28 11:03:31.203785] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:52.516 [2024-11-28 11:03:31.203795] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.203809] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 
0x183d00 00:30:52.516 [2024-11-28 11:03:31.203821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.203846] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.203854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.203866] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.203878] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.203894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.203912] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.203922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.203931] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.203946] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.203959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.203980] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.203989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.204002] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204014] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.204044] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.204055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.204064] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204082] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.204122] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.204131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.204142] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204157] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.204189] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.204200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.204209] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204225] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.204259] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.204268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.204279] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204291] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.204327] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.204341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.204365] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204379] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.204415] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.204424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.204434] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204446] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.204477] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.204487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.204496] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204513] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.204552] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.516 [2024-11-28 11:03:31.204561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:30:52.516 [2024-11-28 11:03:31.204572] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204584] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.516 [2024-11-28 11:03:31.204597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.516 [2024-11-28 11:03:31.204630] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.204641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.204650] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.204664] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.204675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.204707] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.204716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.204727] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.204741] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.204756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.204777] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.204788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.204797] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.204811] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 
length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.204822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.204849] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.204858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.204869] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.204880] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.204893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.204912] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.204927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.204936] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.204950] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.204961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.204985] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.204993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.205004] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205016] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.205043] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.205054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.205063] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205077] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.205111] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.205120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 
11:03:31.205133] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205146] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.205178] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.205189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.205198] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205212] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.205245] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.205254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.205265] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205279] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.205320] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.205330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.205339] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205358] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.205390] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.205398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.205409] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205425] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.205458] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.205471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.205480] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205497] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.205532] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.205540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.205553] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205564] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.205584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.205595] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.209617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.209637] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.209658] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.209675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.517 [2024-11-28 11:03:31.209705] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.517 [2024-11-28 11:03:31.209715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:30:52.517 [2024-11-28 11:03:31.209729] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x183d00 00:30:52.517 [2024-11-28 11:03:31.209739] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:30:52.517 128 00:30:52.517 Transport Service Identifier: 4420 00:30:52.517 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:52.518 Transport Address: 192.168.100.8 00:30:52.518 Transport Specific Address Subtype - RDMA 00:30:52.518 RDMA QP Service Type: 1 (Reliable Connected) 00:30:52.518 RDMA Provider Type: 1 (No provider specified) 00:30:52.518 RDMA CM Service: 1 (RDMA_CM) 00:30:52.518 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:52.518 [2024-11-28 11:03:31.371788] Starting 
SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:30:52.518 [2024-11-28 11:03:31.371859] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1843398 ] 00:30:52.780 [2024-11-28 11:03:31.454207] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:52.780 [2024-11-28 11:03:31.454320] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:30:52.780 [2024-11-28 11:03:31.454342] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:30:52.780 [2024-11-28 11:03:31.454351] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:30:52.780 [2024-11-28 11:03:31.454393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:52.780 [2024-11-28 11:03:31.473142] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:30:52.780 [2024-11-28 11:03:31.487668] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:52.780 [2024-11-28 11:03:31.487690] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:30:52.780 [2024-11-28 11:03:31.487712] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487726] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487738] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487746] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487758] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487767] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487776] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487784] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487794] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487802] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487812] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487820] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487830] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487840] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487849] nvme_rdma.c: 
878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487857] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487867] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487875] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487887] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487895] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487904] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487912] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487922] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487930] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487946] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487954] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487964] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487972] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487983] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.487993] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.488003] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.488010] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:30:52.780 [2024-11-28 11:03:31.488022] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:52.780 [2024-11-28 11:03:31.488029] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:30:52.780 [2024-11-28 11:03:31.488060] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.488078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0x183d00 00:30:52.780 [2024-11-28 11:03:31.492624] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.780 [2024-11-28 11:03:31.492654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:52.780 [2024-11-28 11:03:31.492672] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.492688] 
nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:52.780 [2024-11-28 11:03:31.492705] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:52.780 [2024-11-28 11:03:31.492715] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:52.780 [2024-11-28 11:03:31.492742] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.492756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.780 [2024-11-28 11:03:31.492788] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.780 [2024-11-28 11:03:31.492798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:30:52.780 [2024-11-28 11:03:31.492810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:52.780 [2024-11-28 11:03:31.492822] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.492834] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:52.780 [2024-11-28 11:03:31.492846] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.492861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.780 [2024-11-28 11:03:31.492887] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.780 [2024-11-28 11:03:31.492898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:30:52.780 [2024-11-28 11:03:31.492907] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:52.780 [2024-11-28 11:03:31.492918] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.492929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:52.780 [2024-11-28 11:03:31.492943] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.780 [2024-11-28 11:03:31.492954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.781 [2024-11-28 11:03:31.492974] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.781 [2024-11-28 11:03:31.492983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:52.781 [2024-11-28 11:03:31.492994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:52.781 [2024-11-28 11:03:31.493007] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 
length 0x10 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493021] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.781 [2024-11-28 11:03:31.493055] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.781 [2024-11-28 11:03:31.493064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:52.781 [2024-11-28 11:03:31.493078] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:52.781 [2024-11-28 11:03:31.493087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:52.781 [2024-11-28 11:03:31.493099] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:52.781 [2024-11-28 11:03:31.493221] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:52.781 [2024-11-28 11:03:31.493232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:52.781 [2024-11-28 11:03:31.493247] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.781 [2024-11-28 11:03:31.493283] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.781 [2024-11-28 11:03:31.493292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:52.781 [2024-11-28 11:03:31.493304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:52.781 [2024-11-28 11:03:31.493316] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493329] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.781 [2024-11-28 11:03:31.493366] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.781 [2024-11-28 11:03:31.493374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:52.781 [2024-11-28 11:03:31.493386] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:52.781 [2024-11-28 11:03:31.493395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.493406] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493418] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:52.781 [2024-11-28 11:03:31.493436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.493454] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:30:52.781 [2024-11-28 11:03:31.493523] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.781 [2024-11-28 11:03:31.493534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:52.781 [2024-11-28 11:03:31.493551] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:52.781 [2024-11-28 11:03:31.493565] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:52.781 [2024-11-28 11:03:31.493573] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:52.781 [2024-11-28 11:03:31.493585] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:52.781 [2024-11-28 11:03:31.493594] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:52.781 [2024-11-28 11:03:31.493604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.493619] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.493651] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.781 [2024-11-28 11:03:31.493687] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.781 [2024-11-28 11:03:31.493698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:52.781 [2024-11-28 11:03:31.493709] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.781 [2024-11-28 11:03:31.493736] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.781 [2024-11-28 11:03:31.493757] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.781 [2024-11-28 11:03:31.493778] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.781 [2024-11-28 11:03:31.493797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.493812] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.493839] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.781 [2024-11-28 11:03:31.493877] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.781 [2024-11-28 11:03:31.493885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:30:52.781 [2024-11-28 11:03:31.493897] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:52.781 [2024-11-28 11:03:31.493906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.493916] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.493942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.493952] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.493967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.781 [2024-11-28 11:03:31.493984] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.781 [2024-11-28 11:03:31.493994] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:30:52.781 [2024-11-28 11:03:31.494069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.494080] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.494094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.494117] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.494129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183d00 00:30:52.781 [2024-11-28 11:03:31.494165] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.781 [2024-11-28 11:03:31.494173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:52.781 [2024-11-28 11:03:31.494204] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:52.781 [2024-11-28 11:03:31.494222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.494233] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.494245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:52.781 [2024-11-28 11:03:31.494263] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.781 [2024-11-28 11:03:31.494275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:30:52.781 [2024-11-28 11:03:31.494351] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.781 [2024-11-28 11:03:31.494360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:52.782 [2024-11-28 11:03:31.494387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:52.782 [2024-11-28 11:03:31.494399] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:52.782 [2024-11-28 11:03:31.494427] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:30:52.782 [2024-11-28 11:03:31.494470] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.782 [2024-11-28 11:03:31.494480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:52.782 [2024-11-28 11:03:31.494500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:52.782 [2024-11-28 11:03:31.494511] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:52.782 [2024-11-28 11:03:31.494543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:52.782 [2024-11-28 11:03:31.494555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:52.782 [2024-11-28 11:03:31.494566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:52.782 [2024-11-28 11:03:31.494575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:52.782 [2024-11-28 11:03:31.494587] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:52.782 [2024-11-28 11:03:31.494598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:52.782 [2024-11-28 11:03:31.494614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:52.782 [2024-11-28 11:03:31.494646] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.782 [2024-11-28 11:03:31.494671] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.782 [2024-11-28 11:03:31.494699] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.782 [2024-11-28 11:03:31.494710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:52.782 [2024-11-28 11:03:31.494725] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494736] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.782 [2024-11-28 11:03:31.494744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:52.782 [2024-11-28 11:03:31.494754] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x183d00 00:30:52.782 [2024-11-28 
11:03:31.494768] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.782 [2024-11-28 11:03:31.494797] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.782 [2024-11-28 11:03:31.494807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:52.782 [2024-11-28 11:03:31.494816] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494830] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.782 [2024-11-28 11:03:31.494872] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.782 [2024-11-28 11:03:31.494881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:52.782 [2024-11-28 11:03:31.494892] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494905] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.782 [2024-11-28 11:03:31.494936] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.782 [2024-11-28 11:03:31.494946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:30:52.782 [2024-11-28 11:03:31.494955] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494982] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.494998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183d00 00:30:52.782 [2024-11-28 11:03:31.495015] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.495026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183d00 00:30:52.782 [2024-11-28 11:03:31.495043] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.495054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c8000 len:0x200 key:0x183d00 00:30:52.782 [2024-11-28 11:03:31.495071] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.495082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c6000 len:0x1000 key:0x183d00 00:30:52.782 [2024-11-28 11:03:31.495098] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.782 [2024-11-28 11:03:31.495107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:52.782 [2024-11-28 11:03:31.495144] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.495153] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.782 [2024-11-28 11:03:31.495164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:52.782 [2024-11-28 11:03:31.495177] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.495190] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.782 [2024-11-28 11:03:31.495197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:52.782 [2024-11-28 11:03:31.495210] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x183d00 00:30:52.782 [2024-11-28 11:03:31.495219] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.782 [2024-11-28 11:03:31.495228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:52.782 [2024-11-28 11:03:31.495246] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x183d00 00:30:52.782 ===================================================== 00:30:52.782 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:52.782 ===================================================== 00:30:52.782 Controller Capabilities/Features 00:30:52.782 ================================ 00:30:52.782 Vendor ID: 8086 00:30:52.782 Subsystem Vendor ID: 8086 00:30:52.782 Serial Number: SPDK00000000000001 00:30:52.782 Model Number: SPDK bdev Controller 00:30:52.782 Firmware Version: 25.01 00:30:52.782 Recommended Arb Burst: 6 00:30:52.782 IEEE OUI Identifier: e4 d2 5c 00:30:52.782 Multi-path I/O 00:30:52.782 May have multiple subsystem ports: Yes 00:30:52.782 May have multiple controllers: Yes 00:30:52.782 Associated with SR-IOV VF: No 00:30:52.782 Max Data Transfer Size: 131072 00:30:52.782 Max Number of Namespaces: 32 00:30:52.782 Max Number of I/O Queues: 127 00:30:52.782 NVMe Specification Version (VS): 1.3 00:30:52.782 NVMe Specification Version (Identify): 1.3 00:30:52.782 Maximum Queue Entries: 128 00:30:52.782 Contiguous Queues Required: Yes 00:30:52.782 Arbitration Mechanisms Supported 00:30:52.782 Weighted Round Robin: Not Supported 00:30:52.782 Vendor Specific: Not Supported 00:30:52.782 Reset Timeout: 15000 ms 00:30:52.782 Doorbell Stride: 4 bytes 00:30:52.782 NVM Subsystem Reset: Not Supported 00:30:52.782 Command Sets Supported 00:30:52.782 NVM Command Set: Supported 00:30:52.782 Boot Partition: Not Supported 00:30:52.782 Memory Page Size 
Minimum: 4096 bytes 00:30:52.782 Memory Page Size Maximum: 4096 bytes 00:30:52.782 Persistent Memory Region: Not Supported 00:30:52.782 Optional Asynchronous Events Supported 00:30:52.782 Namespace Attribute Notices: Supported 00:30:52.782 Firmware Activation Notices: Not Supported 00:30:52.782 ANA Change Notices: Not Supported 00:30:52.782 PLE Aggregate Log Change Notices: Not Supported 00:30:52.782 LBA Status Info Alert Notices: Not Supported 00:30:52.782 EGE Aggregate Log Change Notices: Not Supported 00:30:52.782 Normal NVM Subsystem Shutdown event: Not Supported 00:30:52.782 Zone Descriptor Change Notices: Not Supported 00:30:52.782 Discovery Log Change Notices: Not Supported 00:30:52.782 Controller Attributes 00:30:52.782 128-bit Host Identifier: Supported 00:30:52.782 Non-Operational Permissive Mode: Not Supported 00:30:52.782 NVM Sets: Not Supported 00:30:52.782 Read Recovery Levels: Not Supported 00:30:52.782 Endurance Groups: Not Supported 00:30:52.782 Predictable Latency Mode: Not Supported 00:30:52.782 Traffic Based Keep ALive: Not Supported 00:30:52.783 Namespace Granularity: Not Supported 00:30:52.783 SQ Associations: Not Supported 00:30:52.783 UUID List: Not Supported 00:30:52.783 Multi-Domain Subsystem: Not Supported 00:30:52.783 Fixed Capacity Management: Not Supported 00:30:52.783 Variable Capacity Management: Not Supported 00:30:52.783 Delete Endurance Group: Not Supported 00:30:52.783 Delete NVM Set: Not Supported 00:30:52.783 Extended LBA Formats Supported: Not Supported 00:30:52.783 Flexible Data Placement Supported: Not Supported 00:30:52.783 00:30:52.783 Controller Memory Buffer Support 00:30:52.783 ================================ 00:30:52.783 Supported: No 00:30:52.783 00:30:52.783 Persistent Memory Region Support 00:30:52.783 ================================ 00:30:52.783 Supported: No 00:30:52.783 00:30:52.783 Admin Command Set Attributes 00:30:52.783 ============================ 00:30:52.783 Security Send/Receive: Not Supported 00:30:52.783 Format NVM: Not Supported 00:30:52.783 Firmware Activate/Download: Not Supported 00:30:52.783 Namespace Management: Not Supported 00:30:52.783 Device Self-Test: Not Supported 00:30:52.783 Directives: Not Supported 00:30:52.783 NVMe-MI: Not Supported 00:30:52.783 Virtualization Management: Not Supported 00:30:52.783 Doorbell Buffer Config: Not Supported 00:30:52.783 Get LBA Status Capability: Not Supported 00:30:52.783 Command & Feature Lockdown Capability: Not Supported 00:30:52.783 Abort Command Limit: 4 00:30:52.783 Async Event Request Limit: 4 00:30:52.783 Number of Firmware Slots: N/A 00:30:52.783 Firmware Slot 1 Read-Only: N/A 00:30:52.783 Firmware Activation Without Reset: N/A 00:30:52.783 Multiple Update Detection Support: N/A 00:30:52.783 Firmware Update Granularity: No Information Provided 00:30:52.783 Per-Namespace SMART Log: No 00:30:52.783 Asymmetric Namespace Access Log Page: Not Supported 00:30:52.783 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:52.783 Command Effects Log Page: Supported 00:30:52.783 Get Log Page Extended Data: Supported 00:30:52.783 Telemetry Log Pages: Not Supported 00:30:52.783 Persistent Event Log Pages: Not Supported 00:30:52.783 Supported Log Pages Log Page: May Support 00:30:52.783 Commands Supported & Effects Log Page: Not Supported 00:30:52.783 Feature Identifiers & Effects Log Page:May Support 00:30:52.783 NVMe-MI Commands & Effects Log Page: May Support 00:30:52.783 Data Area 4 for Telemetry Log: Not Supported 00:30:52.783 Error Log Page Entries Supported: 128 00:30:52.783 Keep 
Alive: Supported 00:30:52.783 Keep Alive Granularity: 10000 ms 00:30:52.783 00:30:52.783 NVM Command Set Attributes 00:30:52.783 ========================== 00:30:52.783 Submission Queue Entry Size 00:30:52.783 Max: 64 00:30:52.783 Min: 64 00:30:52.783 Completion Queue Entry Size 00:30:52.783 Max: 16 00:30:52.783 Min: 16 00:30:52.783 Number of Namespaces: 32 00:30:52.783 Compare Command: Supported 00:30:52.783 Write Uncorrectable Command: Not Supported 00:30:52.783 Dataset Management Command: Supported 00:30:52.783 Write Zeroes Command: Supported 00:30:52.783 Set Features Save Field: Not Supported 00:30:52.783 Reservations: Supported 00:30:52.783 Timestamp: Not Supported 00:30:52.783 Copy: Supported 00:30:52.783 Volatile Write Cache: Present 00:30:52.783 Atomic Write Unit (Normal): 1 00:30:52.783 Atomic Write Unit (PFail): 1 00:30:52.783 Atomic Compare & Write Unit: 1 00:30:52.783 Fused Compare & Write: Supported 00:30:52.783 Scatter-Gather List 00:30:52.783 SGL Command Set: Supported 00:30:52.783 SGL Keyed: Supported 00:30:52.783 SGL Bit Bucket Descriptor: Not Supported 00:30:52.783 SGL Metadata Pointer: Not Supported 00:30:52.783 Oversized SGL: Not Supported 00:30:52.783 SGL Metadata Address: Not Supported 00:30:52.783 SGL Offset: Supported 00:30:52.783 Transport SGL Data Block: Not Supported 00:30:52.783 Replay Protected Memory Block: Not Supported 00:30:52.783 00:30:52.783 Firmware Slot Information 00:30:52.783 ========================= 00:30:52.783 Active slot: 1 00:30:52.783 Slot 1 Firmware Revision: 25.01 00:30:52.783 00:30:52.783 00:30:52.783 Commands Supported and Effects 00:30:52.783 ============================== 00:30:52.783 Admin Commands 00:30:52.783 -------------- 00:30:52.783 Get Log Page (02h): Supported 00:30:52.783 Identify (06h): Supported 00:30:52.783 Abort (08h): Supported 00:30:52.783 Set Features (09h): Supported 00:30:52.783 Get Features (0Ah): Supported 00:30:52.783 Asynchronous Event Request (0Ch): Supported 00:30:52.783 Keep Alive (18h): Supported 00:30:52.783 I/O Commands 00:30:52.783 ------------ 00:30:52.783 Flush (00h): Supported LBA-Change 00:30:52.783 Write (01h): Supported LBA-Change 00:30:52.783 Read (02h): Supported 00:30:52.783 Compare (05h): Supported 00:30:52.783 Write Zeroes (08h): Supported LBA-Change 00:30:52.783 Dataset Management (09h): Supported LBA-Change 00:30:52.783 Copy (19h): Supported LBA-Change 00:30:52.783 00:30:52.783 Error Log 00:30:52.783 ========= 00:30:52.783 00:30:52.783 Arbitration 00:30:52.783 =========== 00:30:52.783 Arbitration Burst: 1 00:30:52.783 00:30:52.783 Power Management 00:30:52.783 ================ 00:30:52.783 Number of Power States: 1 00:30:52.783 Current Power State: Power State #0 00:30:52.783 Power State #0: 00:30:52.783 Max Power: 0.00 W 00:30:52.783 Non-Operational State: Operational 00:30:52.783 Entry Latency: Not Reported 00:30:52.783 Exit Latency: Not Reported 00:30:52.783 Relative Read Throughput: 0 00:30:52.783 Relative Read Latency: 0 00:30:52.783 Relative Write Throughput: 0 00:30:52.783 Relative Write Latency: 0 00:30:52.783 Idle Power: Not Reported 00:30:52.783 Active Power: Not Reported 00:30:52.783 Non-Operational Permissive Mode: Not Supported 00:30:52.783 00:30:52.783 Health Information 00:30:52.783 ================== 00:30:52.783 Critical Warnings: 00:30:52.783 Available Spare Space: OK 00:30:52.783 Temperature: OK 00:30:52.783 Device Reliability: OK 00:30:52.783 Read Only: No 00:30:52.783 Volatile Memory Backup: OK 00:30:52.783 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:52.783 Temperature 
Threshold: 0 Kelvin (-273 Celsius) 00:30:52.783 Available Spare: 0% 00:30:52.783 Available Spare Threshold: 0% 00:30:52.783 Life Percentage [2024-11-28 11:03:31.495374] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183d00 00:30:52.783 [2024-11-28 11:03:31.495389] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.783 [2024-11-28 11:03:31.495419] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.783 [2024-11-28 11:03:31.495428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:52.783 [2024-11-28 11:03:31.495440] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x183d00 00:30:52.783 [2024-11-28 11:03:31.495483] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:52.783 [2024-11-28 11:03:31.495507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.783 [2024-11-28 11:03:31.495518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.783 [2024-11-28 11:03:31.495530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.783 [2024-11-28 11:03:31.495540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.783 [2024-11-28 11:03:31.495554] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x183d00 00:30:52.783 [2024-11-28 11:03:31.495566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.783 [2024-11-28 11:03:31.495589] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.783 [2024-11-28 11:03:31.495598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:30:52.783 [2024-11-28 11:03:31.495632] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.783 [2024-11-28 11:03:31.495645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.783 [2024-11-28 11:03:31.495657] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183d00 00:30:52.783 [2024-11-28 11:03:31.495676] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.783 [2024-11-28 11:03:31.495687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:52.783 [2024-11-28 11:03:31.495696] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:52.783 [2024-11-28 11:03:31.495709] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:52.783 [2024-11-28 11:03:31.495720] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 
0x10 lkey 0x183d00 00:30:52.783 [2024-11-28 11:03:31.495736] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.783 [2024-11-28 11:03:31.495752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.783 [2024-11-28 11:03:31.495776] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.495784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.495795] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.495807] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.495821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.495839] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.495849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.495857] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.495871] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.495884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.495906] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.495914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.495924] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.495936] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.495951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.495965] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.495976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.495984] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.495998] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.496037] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 
11:03:31.496045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.496055] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496073] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.496110] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.496120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.496130] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496144] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.496179] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.496188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.496198] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496209] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.496236] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.496251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.496259] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496275] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.496310] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.496318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.496329] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496343] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496355] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.496372] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.496384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.496392] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496406] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.496440] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.496454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.496464] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496478] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.496509] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.496521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.496529] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496543] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.496554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.496575] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.496584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.496597] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.500619] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.500645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:52.784 [2024-11-28 11:03:31.500668] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:52.784 [2024-11-28 11:03:31.500679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0008 p:0 m:0 dnr:0 00:30:52.784 [2024-11-28 11:03:31.500689] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf3c0 length 0x10 lkey 0x183d00 00:30:52.784 [2024-11-28 11:03:31.500702] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:30:52.784 Used: 0% 00:30:52.784 Data Units Read: 0 00:30:52.784 Data Units Written: 0 00:30:52.784 Host Read Commands: 0 00:30:52.784 Host Write Commands: 0 00:30:52.784 Controller Busy Time: 0 minutes 00:30:52.784 Power Cycles: 0 00:30:52.784 Power On Hours: 0 hours 00:30:52.784 Unsafe Shutdowns: 0 00:30:52.784 Unrecoverable Media Errors: 0 00:30:52.784 Lifetime Error Log Entries: 0 00:30:52.784 Warning Temperature Time: 0 minutes 00:30:52.784 Critical Temperature Time: 0 minutes 00:30:52.784 00:30:52.784 Number of Queues 00:30:52.784 ================ 00:30:52.784 Number of I/O Submission Queues: 127 00:30:52.784 Number of I/O Completion Queues: 127 00:30:52.784 00:30:52.784 Active Namespaces 00:30:52.784 ================= 00:30:52.784 Namespace ID:1 00:30:52.784 Error Recovery Timeout: Unlimited 00:30:52.784 Command Set Identifier: NVM (00h) 00:30:52.784 Deallocate: Supported 00:30:52.785 Deallocated/Unwritten Error: Not Supported 00:30:52.785 Deallocated Read Value: Unknown 00:30:52.785 Deallocate in Write Zeroes: Not Supported 00:30:52.785 Deallocated Guard Field: 0xFFFF 00:30:52.785 Flush: Supported 00:30:52.785 Reservation: Supported 00:30:52.785 Namespace Sharing Capabilities: Multiple Controllers 00:30:52.785 Size (in LBAs): 131072 (0GiB) 00:30:52.785 Capacity (in LBAs): 131072 (0GiB) 00:30:52.785 Utilization (in LBAs): 131072 (0GiB) 00:30:52.785 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:52.785 EUI64: ABCDEF0123456789 00:30:52.785 UUID: ba645fc6-813e-491f-9702-487e61e90bc9 00:30:52.785 Thin Provisioning: Not Supported 00:30:52.785 Per-NS Atomic Units: Yes 00:30:52.785 Atomic Boundary Size (Normal): 0 00:30:52.785 Atomic Boundary Size (PFail): 0 00:30:52.785 Atomic Boundary Offset: 0 00:30:52.785 Maximum Single Source Range Length: 65535 00:30:52.785 Maximum Copy Length: 65535 00:30:52.785 Maximum Source Range Count: 1 00:30:52.785 NGUID/EUI64 Never Reused: No 00:30:52.785 Namespace Write Protected: No 00:30:52.785 Number of LBA Formats: 1 00:30:52.785 Current LBA Format: LBA Format #00 00:30:52.785 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:52.785 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 
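The controller and namespace identify data printed above come from the nvmf_identify host test attaching over RDMA to the listener at 192.168.100.8:4420 (subsystem nqn.2016-06.io.spdk:cnode1). A minimal sketch of reproducing that attach by hand with nvme-cli, assuming the kernel nvme-rdma module is loaded and that the new controller enumerates as /dev/nvme0 (the device name is an assumption, not taken from this run):
  # Discover subsystems exported by the target listener shown in the trace above
  nvme discover -t rdma -a 192.168.100.8 -s 4420
  # Connect to the subsystem created by this test run
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  # Dump controller and namespace identify data (assumes the controller came up as nvme0)
  nvme id-ctrl /dev/nvme0
  nvme id-ns /dev/nvme0n1
  # Detach again
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1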
00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:52.785 rmmod nvme_rdma 00:30:52.785 rmmod nvme_fabrics 00:30:52.785 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1843109 ']' 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1843109 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1843109 ']' 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1843109 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1843109 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1843109' 00:30:53.044 killing process with pid 1843109 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1843109 00:30:53.044 11:03:31 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1843109 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:54.952 00:30:54.952 real 0m12.494s 00:30:54.952 user 0m14.898s 00:30:54.952 sys 0m7.161s 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:54.952 ************************************ 00:30:54.952 END TEST nvmf_identify 00:30:54.952 ************************************ 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.952 ************************************ 00:30:54.952 START TEST nvmf_perf 00:30:54.952 ************************************ 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:30:54.952 * Looking for test storage... 
00:30:54.952 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:54.952 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:54.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.953 --rc genhtml_branch_coverage=1 00:30:54.953 --rc genhtml_function_coverage=1 00:30:54.953 --rc genhtml_legend=1 00:30:54.953 --rc geninfo_all_blocks=1 00:30:54.953 --rc geninfo_unexecuted_blocks=1 00:30:54.953 00:30:54.953 ' 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:54.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.953 --rc genhtml_branch_coverage=1 00:30:54.953 --rc genhtml_function_coverage=1 00:30:54.953 --rc genhtml_legend=1 00:30:54.953 --rc geninfo_all_blocks=1 00:30:54.953 --rc geninfo_unexecuted_blocks=1 00:30:54.953 00:30:54.953 ' 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:54.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.953 --rc genhtml_branch_coverage=1 00:30:54.953 --rc genhtml_function_coverage=1 00:30:54.953 --rc genhtml_legend=1 00:30:54.953 --rc geninfo_all_blocks=1 00:30:54.953 --rc geninfo_unexecuted_blocks=1 00:30:54.953 00:30:54.953 ' 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:54.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.953 --rc genhtml_branch_coverage=1 00:30:54.953 --rc genhtml_function_coverage=1 00:30:54.953 --rc genhtml_legend=1 00:30:54.953 --rc geninfo_all_blocks=1 00:30:54.953 --rc geninfo_unexecuted_blocks=1 00:30:54.953 00:30:54.953 ' 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.953 11:03:33 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.953 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:55.213 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:55.213 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.214 11:03:33 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:55.214 11:03:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:03.340 11:03:41 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:03.340 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:03.340 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:03.340 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
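The device discovery traced above walks the Mellanox PCI functions (0x15b3:0x1015) and maps each one to its netdev through sysfs, which is why 0000:d9:00.0 resolves to mlx_0_0. A standalone sketch of that lookup, using the PCI address reported in this run:
  # List the network interfaces backed by a given PCI function
  pci=0000:d9:00.0
  for netdev in /sys/bus/pci/devices/$pci/net/*; do
      echo "$pci -> $(basename "$netdev")"
  done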
00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:03.340 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:03.340 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:03.341 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:03.341 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:03.341 altname enp217s0f0np0 00:31:03.341 altname ens818f0np0 00:31:03.341 inet 192.168.100.8/24 scope global mlx_0_0 00:31:03.341 valid_lft forever preferred_lft forever 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:03.341 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:03.341 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:03.341 altname enp217s0f1np1 00:31:03.341 altname ens818f1np1 00:31:03.341 inet 192.168.100.9/24 scope global mlx_0_1 00:31:03.341 valid_lft forever preferred_lft forever 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 
-- # '[' '' == iso ']' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 
-- # RDMA_IP_LIST='192.168.100.8 00:31:03.341 192.168.100.9' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:03.341 192.168.100.9' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:03.341 192.168.100.9' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1847579 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1847579 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1847579 ']' 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:03.341 11:03:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:03.341 [2024-11-28 11:03:41.647541] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
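NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP above are derived by parsing `ip -o -4 addr show` for each RDMA interface, as the get_ip_address trace shows. The same idiom as a standalone sketch, with the interface names reported in this run:
  # Extract the IPv4 address assigned to an RDMA-capable netdev
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # 192.168.100.8 in this run
  get_ip_address mlx_0_1   # 192.168.100.9 in this run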
00:31:03.341 [2024-11-28 11:03:41.647641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:03.341 [2024-11-28 11:03:41.801170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:03.341 [2024-11-28 11:03:41.903628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:03.341 [2024-11-28 11:03:41.903670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:03.341 [2024-11-28 11:03:41.903683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:03.341 [2024-11-28 11:03:41.903697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:03.341 [2024-11-28 11:03:41.903706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:03.341 [2024-11-28 11:03:41.906316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.341 [2024-11-28 11:03:41.906400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:03.341 [2024-11-28 11:03:41.906423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.341 [2024-11-28 11:03:41.906432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:03.601 11:03:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:03.601 11:03:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:31:03.601 11:03:42 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:03.601 11:03:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:03.601 11:03:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:03.861 11:03:42 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:03.861 11:03:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:03.861 11:03:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:07.148 11:03:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:07.148 11:03:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:07.148 11:03:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:31:07.148 11:03:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:07.408 11:03:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:07.408 11:03:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:31:07.408 11:03:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:07.408 11:03:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:31:07.408 11:03:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:31:07.408 [2024-11-28 11:03:46.259562] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:31:07.408 [2024-11-28 11:03:46.284379] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029ec0/0x7f572cf48940) succeed. 00:31:07.667 [2024-11-28 11:03:46.294103] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a040/0x7f572cf04940) succeed. 00:31:07.667 11:03:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:07.927 11:03:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:07.927 11:03:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:08.186 11:03:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:08.186 11:03:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:08.446 11:03:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:08.446 [2024-11-28 11:03:47.241160] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:08.446 11:03:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:08.705 11:03:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:31:08.705 11:03:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:31:08.705 11:03:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:08.705 11:03:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:31:10.084 Initializing NVMe Controllers 00:31:10.084 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:31:10.084 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:31:10.084 Initialization complete. Launching workers. 
00:31:10.084 ======================================================== 00:31:10.084 Latency(us) 00:31:10.084 Device Information : IOPS MiB/s Average min max 00:31:10.084 PCIE (0000:d8:00.0) NSID 1 from core 0: 92601.53 361.72 345.12 40.72 7224.03 00:31:10.084 ======================================================== 00:31:10.084 Total : 92601.53 361.72 345.12 40.72 7224.03 00:31:10.084 00:31:10.344 11:03:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:13.634 Initializing NVMe Controllers 00:31:13.634 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:13.634 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:13.634 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:13.634 Initialization complete. Launching workers. 00:31:13.634 ======================================================== 00:31:13.634 Latency(us) 00:31:13.634 Device Information : IOPS MiB/s Average min max 00:31:13.634 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5991.76 23.41 166.50 58.63 7011.34 00:31:13.634 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4641.19 18.13 215.03 83.00 7060.20 00:31:13.634 ======================================================== 00:31:13.634 Total : 10632.95 41.53 187.68 58.63 7060.20 00:31:13.634 00:31:13.893 11:03:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:17.188 Initializing NVMe Controllers 00:31:17.188 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:17.188 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:17.188 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:17.188 Initialization complete. Launching workers. 00:31:17.188 ======================================================== 00:31:17.188 Latency(us) 00:31:17.188 Device Information : IOPS MiB/s Average min max 00:31:17.188 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16099.48 62.89 1979.36 554.12 7349.01 00:31:17.188 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3982.89 15.56 8001.14 5467.70 14926.24 00:31:17.188 ======================================================== 00:31:17.188 Total : 20082.38 78.45 3173.65 554.12 14926.24 00:31:17.188 00:31:17.447 11:03:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:31:17.447 11:03:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:22.728 Initializing NVMe Controllers 00:31:22.728 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:22.728 Controller IO queue size 128, less than required. 00:31:22.728 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:22.728 Controller IO queue size 128, less than required. 00:31:22.728 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:22.728 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:22.728 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:22.728 Initialization complete. Launching workers. 00:31:22.728 ======================================================== 00:31:22.728 Latency(us) 00:31:22.728 Device Information : IOPS MiB/s Average min max 00:31:22.728 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3287.50 821.87 41214.23 22123.18 406849.49 00:31:22.728 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3457.00 864.25 36525.60 21508.94 240279.38 00:31:22.728 ======================================================== 00:31:22.728 Total : 6744.50 1686.12 38811.00 21508.94 406849.49 00:31:22.728 00:31:22.728 11:04:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:31:22.728 No valid NVMe controllers or AIO or URING devices found 00:31:22.728 Initializing NVMe Controllers 00:31:22.728 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:22.728 Controller IO queue size 128, less than required. 00:31:22.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:22.729 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:22.729 Controller IO queue size 128, less than required. 00:31:22.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:22.729 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:22.729 WARNING: Some requested NVMe devices were skipped 00:31:22.729 11:04:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:31:28.003 Initializing NVMe Controllers 00:31:28.003 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.003 Controller IO queue size 128, less than required. 00:31:28.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.003 Controller IO queue size 128, less than required. 00:31:28.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.003 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:28.003 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:28.003 Initialization complete. Launching workers. 
00:31:28.003 00:31:28.003 ==================== 00:31:28.003 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:28.003 RDMA transport: 00:31:28.003 dev name: mlx5_0 00:31:28.003 polls: 319069 00:31:28.003 idle_polls: 316661 00:31:28.003 completions: 36670 00:31:28.003 queued_requests: 1 00:31:28.003 total_send_wrs: 18335 00:31:28.003 send_doorbell_updates: 2204 00:31:28.003 total_recv_wrs: 18462 00:31:28.003 recv_doorbell_updates: 2206 00:31:28.003 --------------------------------- 00:31:28.003 00:31:28.003 ==================== 00:31:28.003 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:28.003 RDMA transport: 00:31:28.003 dev name: mlx5_0 00:31:28.003 polls: 318384 00:31:28.003 idle_polls: 318143 00:31:28.003 completions: 17346 00:31:28.003 queued_requests: 1 00:31:28.003 total_send_wrs: 8673 00:31:28.003 send_doorbell_updates: 235 00:31:28.003 total_recv_wrs: 8800 00:31:28.003 recv_doorbell_updates: 236 00:31:28.003 --------------------------------- 00:31:28.003 ======================================================== 00:31:28.003 Latency(us) 00:31:28.003 Device Information : IOPS MiB/s Average min max 00:31:28.003 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4583.50 1145.88 28444.14 14502.43 387033.60 00:31:28.003 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2168.00 542.00 60586.47 31779.52 397610.63 00:31:28.003 ======================================================== 00:31:28.003 Total : 6751.50 1687.88 38765.49 14502.43 397610.63 00:31:28.003 00:31:28.003 11:04:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:28.003 11:04:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:28.003 11:04:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:28.003 11:04:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:31:28.003 11:04:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=441d0a50-9f47-43a6-ace4-2380b313e5ff 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 441d0a50-9f47-43a6-ace4-2380b313e5ff 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=441d0a50-9f47-43a6-ace4-2380b313e5ff 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:34.576 { 00:31:34.576 "uuid": "441d0a50-9f47-43a6-ace4-2380b313e5ff", 00:31:34.576 "name": "lvs_0", 00:31:34.576 "base_bdev": "Nvme0n1", 00:31:34.576 "total_data_clusters": 476466, 00:31:34.576 "free_clusters": 476466, 00:31:34.576 "block_size": 512, 00:31:34.576 "cluster_size": 4194304 
00:31:34.576 } 00:31:34.576 ]' 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="441d0a50-9f47-43a6-ace4-2380b313e5ff") .free_clusters' 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=476466 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="441d0a50-9f47-43a6-ace4-2380b313e5ff") .cluster_size' 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1905864 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1905864 00:31:34.576 1905864 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:34.576 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 441d0a50-9f47-43a6-ace4-2380b313e5ff lbd_0 20480 00:31:34.835 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=62930fc8-f166-4c1d-b80c-45c175929b0a 00:31:34.835 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 62930fc8-f166-4c1d-b80c-45c175929b0a lvs_n_0 00:31:36.741 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=0133cea3-00ea-4510-98d0-d8ba8012174b 00:31:36.741 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 0133cea3-00ea-4510-98d0-d8ba8012174b 00:31:36.741 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=0133cea3-00ea-4510-98d0-d8ba8012174b 00:31:36.741 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:36.741 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:36.741 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:36.741 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:36.741 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:36.741 { 00:31:36.741 "uuid": "441d0a50-9f47-43a6-ace4-2380b313e5ff", 00:31:36.741 "name": "lvs_0", 00:31:36.741 "base_bdev": "Nvme0n1", 00:31:36.741 "total_data_clusters": 476466, 00:31:36.741 "free_clusters": 471346, 00:31:36.742 "block_size": 512, 00:31:36.742 "cluster_size": 4194304 00:31:36.742 }, 00:31:36.742 { 00:31:36.742 "uuid": "0133cea3-00ea-4510-98d0-d8ba8012174b", 00:31:36.742 "name": "lvs_n_0", 00:31:36.742 "base_bdev": "62930fc8-f166-4c1d-b80c-45c175929b0a", 00:31:36.742 "total_data_clusters": 5114, 00:31:36.742 "free_clusters": 5114, 00:31:36.742 "block_size": 512, 00:31:36.742 "cluster_size": 4194304 00:31:36.742 } 00:31:36.742 ]' 00:31:36.742 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="0133cea3-00ea-4510-98d0-d8ba8012174b") .free_clusters' 00:31:36.742 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:36.742 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="0133cea3-00ea-4510-98d0-d8ba8012174b") .cluster_size' 00:31:36.742 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:36.742 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:36.742 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:31:36.742 20456 00:31:36.742 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:36.742 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0133cea3-00ea-4510-98d0-d8ba8012174b lbd_nest_0 20456 00:31:36.742 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=c40bceb1-c66c-479d-86e4-04f5b0fa901b 00:31:36.742 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:37.003 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:37.003 11:04:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 c40bceb1-c66c-479d-86e4-04f5b0fa901b 00:31:37.263 11:04:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:37.522 11:04:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:37.522 11:04:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:37.522 11:04:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:37.522 11:04:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:37.522 11:04:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:49.737 Initializing NVMe Controllers 00:31:49.737 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:49.737 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:49.737 Initialization complete. Launching workers. 
00:31:49.737 ======================================================== 00:31:49.737 Latency(us) 00:31:49.737 Device Information : IOPS MiB/s Average min max 00:31:49.737 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5135.20 2.51 194.10 79.49 7034.25 00:31:49.737 ======================================================== 00:31:49.737 Total : 5135.20 2.51 194.10 79.49 7034.25 00:31:49.737 00:31:49.737 11:04:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:49.737 11:04:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:01.959 Initializing NVMe Controllers 00:32:01.959 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:01.959 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:01.959 Initialization complete. Launching workers. 00:32:01.959 ======================================================== 00:32:01.959 Latency(us) 00:32:01.959 Device Information : IOPS MiB/s Average min max 00:32:01.959 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2477.86 309.73 402.84 175.06 8175.87 00:32:01.959 ======================================================== 00:32:01.959 Total : 2477.86 309.73 402.84 175.06 8175.87 00:32:01.959 00:32:01.959 11:04:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:01.959 11:04:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:01.959 11:04:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:11.947 Initializing NVMe Controllers 00:32:11.947 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:11.947 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:11.947 Initialization complete. Launching workers. 00:32:11.947 ======================================================== 00:32:11.947 Latency(us) 00:32:11.947 Device Information : IOPS MiB/s Average min max 00:32:11.947 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10177.80 4.97 3143.52 1084.18 10062.46 00:32:11.947 ======================================================== 00:32:11.947 Total : 10177.80 4.97 3143.52 1084.18 10062.46 00:32:11.947 00:32:12.208 11:04:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:12.208 11:04:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:24.638 Initializing NVMe Controllers 00:32:24.638 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:24.638 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:24.638 Initialization complete. Launching workers. 
00:32:24.638 ======================================================== 00:32:24.638 Latency(us) 00:32:24.638 Device Information : IOPS MiB/s Average min max 00:32:24.638 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3987.30 498.41 8029.13 5893.10 25877.44 00:32:24.638 ======================================================== 00:32:24.638 Total : 3987.30 498.41 8029.13 5893.10 25877.44 00:32:24.638 00:32:24.638 11:05:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:24.638 11:05:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:24.638 11:05:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:36.848 Initializing NVMe Controllers 00:32:36.848 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:36.848 Controller IO queue size 128, less than required. 00:32:36.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:36.848 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:36.848 Initialization complete. Launching workers. 00:32:36.848 ======================================================== 00:32:36.848 Latency(us) 00:32:36.848 Device Information : IOPS MiB/s Average min max 00:32:36.848 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16547.18 8.08 7738.62 2338.10 15886.58 00:32:36.848 ======================================================== 00:32:36.848 Total : 16547.18 8.08 7738.62 2338.10 15886.58 00:32:36.848 00:32:36.848 11:05:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:36.848 11:05:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:46.834 Initializing NVMe Controllers 00:32:46.834 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:46.834 Controller IO queue size 128, less than required. 00:32:46.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:46.834 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:46.834 Initialization complete. Launching workers. 
00:32:46.834 ======================================================== 00:32:46.834 Latency(us) 00:32:46.834 Device Information : IOPS MiB/s Average min max 00:32:46.834 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9790.64 1223.83 13074.38 3761.36 91703.91 00:32:46.834 ======================================================== 00:32:46.834 Total : 9790.64 1223.83 13074.38 3761.36 91703.91 00:32:46.834 00:32:46.834 11:05:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:46.834 11:05:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c40bceb1-c66c-479d-86e4-04f5b0fa901b 00:32:47.770 11:05:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:47.770 11:05:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 62930fc8-f166-4c1d-b80c-45c175929b0a 00:32:48.030 11:05:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:48.290 rmmod nvme_rdma 00:32:48.290 rmmod nvme_fabrics 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1847579 ']' 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1847579 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1847579 ']' 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1847579 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1847579 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:48.290 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:48.290 11:05:27 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1847579' 00:32:48.291 killing process with pid 1847579 00:32:48.291 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1847579 00:32:48.291 11:05:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1847579 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:32:52.486 00:32:52.486 real 1m57.113s 00:32:52.486 user 7m17.248s 00:32:52.486 sys 0m9.116s 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:52.486 ************************************ 00:32:52.486 END TEST nvmf_perf 00:32:52.486 ************************************ 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.486 ************************************ 00:32:52.486 START TEST nvmf_fio_host 00:32:52.486 ************************************ 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:32:52.486 * Looking for test storage... 
00:32:52.486 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:32:52.486 11:05:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.486 --rc genhtml_branch_coverage=1 00:32:52.486 --rc genhtml_function_coverage=1 00:32:52.486 --rc genhtml_legend=1 00:32:52.486 --rc geninfo_all_blocks=1 00:32:52.486 --rc geninfo_unexecuted_blocks=1 00:32:52.486 00:32:52.486 ' 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.486 --rc genhtml_branch_coverage=1 00:32:52.486 --rc genhtml_function_coverage=1 00:32:52.486 --rc genhtml_legend=1 00:32:52.486 --rc geninfo_all_blocks=1 00:32:52.486 --rc geninfo_unexecuted_blocks=1 00:32:52.486 00:32:52.486 ' 00:32:52.486 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.486 --rc genhtml_branch_coverage=1 00:32:52.487 --rc genhtml_function_coverage=1 00:32:52.487 --rc genhtml_legend=1 00:32:52.487 --rc geninfo_all_blocks=1 00:32:52.487 --rc geninfo_unexecuted_blocks=1 00:32:52.487 00:32:52.487 ' 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:52.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.487 --rc genhtml_branch_coverage=1 00:32:52.487 --rc genhtml_function_coverage=1 00:32:52.487 --rc genhtml_legend=1 00:32:52.487 --rc geninfo_all_blocks=1 00:32:52.487 --rc geninfo_unexecuted_blocks=1 00:32:52.487 00:32:52.487 ' 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.487 11:05:31 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:52.487 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:52.487 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:32:52.488 
11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:52.488 11:05:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:00.619 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:00.619 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:00.619 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:00.619 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:00.619 
11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:00.619 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:00.620 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:00.620 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:00.620 altname enp217s0f0np0 00:33:00.620 altname ens818f0np0 00:33:00.620 inet 192.168.100.8/24 scope global mlx_0_0 00:33:00.620 valid_lft forever preferred_lft forever 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:00.620 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:00.620 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:00.620 altname enp217s0f1np1 00:33:00.620 altname ens818f1np1 00:33:00.620 inet 192.168.100.9/24 scope global mlx_0_1 00:33:00.620 valid_lft forever preferred_lft forever 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:00.620 11:05:39 
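Both interface addresses above (192.168.100.8 on mlx_0_0 and 192.168.100.9 on mlx_0_1) come out of the same ip/awk/cut pipeline traced as get_ip_address. A hedged reconstruction of that helper; treating the result as a single address assumes one IPv4 address per interface, which is the case in this run:

    # Print the IPv4 address(es) configured on an interface, with the /prefix stripped
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # expected here: 192.168.100.8
    get_ip_address mlx_0_1   # expected here: 192.168.100.9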
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:00.620 192.168.100.9' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:00.620 192.168.100.9' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:00.620 192.168.100.9' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1869577 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1869577 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1869577 ']' 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:00.620 11:05:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.879 [2024-11-28 11:05:39.529511] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:33:00.879 [2024-11-28 11:05:39.529614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.879 [2024-11-28 11:05:39.684110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:01.138 [2024-11-28 11:05:39.784352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:01.138 [2024-11-28 11:05:39.784399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:01.138 [2024-11-28 11:05:39.784412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.138 [2024-11-28 11:05:39.784424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.138 [2024-11-28 11:05:39.784434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:01.138 [2024-11-28 11:05:39.786877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.138 [2024-11-28 11:05:39.786949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:01.138 [2024-11-28 11:05:39.786971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.138 [2024-11-28 11:05:39.786979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:01.707 11:05:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:01.707 11:05:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:33:01.707 11:05:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:01.707 [2024-11-28 11:05:40.550320] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fcc0eb01940) succeed. 00:33:01.707 [2024-11-28 11:05:40.560060] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fcc0e1bd940) succeed. 
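The two "Create IB device mlx5_X ... succeed" notices are the result of starting the target application and creating the RDMA transport, both traced just above. A condensed sketch of that sequence; $SPDK_DIR stands in for the full workspace path, and the backgrounding plus implicit wait is a simplification of waitforlisten:

    # Start the NVMe-oF target with the shm id, tracepoint mask and core mask used in this run
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Once the RPC socket is listening, create the RDMA transport with the traced options
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192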
00:33:01.966 11:05:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:01.966 11:05:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:01.966 11:05:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.226 11:05:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:02.486 Malloc1 00:33:02.486 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:02.486 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:02.745 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:03.004 [2024-11-28 11:05:41.689804] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:03.004 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:03.264 11:05:41 
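Exporting the Malloc1 bdev over RDMA takes the five rpc.py calls traced above: create the bdev, create subsystem cnode1, attach the namespace, then add the data and discovery listeners on 192.168.100.8:4420. Collected into one place as a sketch, with $SPDK_DIR again abbreviating the workspace path:

    rpc=$SPDK_DIR/scripts/rpc.py

    # 64 MiB malloc bdev with 512-byte blocks, exposed as namespace 1 of cnode1
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The fio jobs that follow are launched through the fio_plugin wrapper, which locates libasan via ldd on the spdk_nvme engine, prepends both to LD_PRELOAD, and then runs /usr/src/fio/fio against example_config.fio with the rdma filename string shown above.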
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:03.264 11:05:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:03.523 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:03.523 fio-3.35 00:33:03.523 Starting 1 thread 00:33:06.057 00:33:06.057 test: (groupid=0, jobs=1): err= 0: pid=1870255: Thu Nov 28 11:05:44 2024 00:33:06.057 read: IOPS=15.3k, BW=59.6MiB/s (62.5MB/s)(119MiB/2004msec) 00:33:06.057 slat (nsec): min=1491, max=69402, avg=1641.91, stdev=703.18 00:33:06.057 clat (usec): min=2999, max=7544, avg=4172.94, stdev=113.42 00:33:06.057 lat (usec): min=3020, max=7545, avg=4174.58, stdev=113.38 00:33:06.057 clat percentiles (usec): 00:33:06.057 | 1.00th=[ 3752], 5.00th=[ 4146], 10.00th=[ 4146], 20.00th=[ 4146], 00:33:06.057 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4178], 60.00th=[ 4178], 00:33:06.057 | 70.00th=[ 4178], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4228], 00:33:06.057 | 99.00th=[ 4555], 99.50th=[ 4555], 99.90th=[ 5473], 99.95th=[ 6521], 00:33:06.057 | 99.99th=[ 7504] 00:33:06.057 bw ( KiB/s): min=59840, max=61992, per=99.94%, avg=60988.00, stdev=1027.26, samples=4 00:33:06.057 iops : min=14960, max=15498, avg=15247.00, stdev=256.81, samples=4 00:33:06.057 write: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(120MiB/2004msec); 0 zone resets 00:33:06.057 slat (nsec): min=1541, max=17602, avg=1724.07, stdev=529.72 00:33:06.057 clat (usec): min=3021, max=7537, avg=4172.14, stdev=120.17 00:33:06.057 lat (usec): min=3035, max=7539, avg=4173.86, stdev=120.14 00:33:06.057 clat percentiles (usec): 00:33:06.057 | 1.00th=[ 3752], 5.00th=[ 4146], 10.00th=[ 4146], 20.00th=[ 4146], 00:33:06.057 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4178], 60.00th=[ 4178], 00:33:06.057 | 70.00th=[ 4178], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4228], 00:33:06.057 | 99.00th=[ 4555], 99.50th=[ 4555], 99.90th=[ 5932], 99.95th=[ 6915], 00:33:06.057 | 99.99th=[ 7504] 00:33:06.057 bw ( KiB/s): min=60263, max=62072, per=99.94%, avg=61071.75, stdev=763.03, samples=4 00:33:06.057 iops : min=15065, max=15518, avg=15267.75, stdev=191.02, samples=4 00:33:06.057 lat (msec) : 4=1.45%, 10=98.55% 00:33:06.057 cpu : usr=99.45%, sys=0.10%, ctx=16, majf=0, minf=1280 00:33:06.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:06.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:06.058 issued rwts: total=30572,30615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:06.058 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:06.058 00:33:06.058 Run status group 0 (all jobs): 00:33:06.058 READ: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=119MiB (125MB), run=2004-2004msec 00:33:06.058 WRITE: 
bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=120MiB (125MB), run=2004-2004msec 00:33:06.316 ----------------------------------------------------- 00:33:06.316 Suppressions used: 00:33:06.316 count bytes template 00:33:06.316 1 63 /usr/src/fio/parse.c 00:33:06.316 1 8 libtcmalloc_minimal.so 00:33:06.316 ----------------------------------------------------- 00:33:06.316 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:06.316 11:05:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:06.574 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:06.574 fio-3.35 00:33:06.574 Starting 1 thread 00:33:09.110 00:33:09.110 test: (groupid=0, jobs=1): err= 0: pid=1870888: Thu Nov 28 11:05:47 2024 00:33:09.110 read: IOPS=12.2k, BW=190MiB/s (199MB/s)(373MiB/1963msec) 00:33:09.110 slat (nsec): min=2490, max=59369, avg=2893.31, stdev=1224.39 00:33:09.110 clat (usec): min=527, max=8973, avg=1943.10, 
stdev=1597.28 00:33:09.110 lat (usec): min=530, max=8975, avg=1946.00, stdev=1597.65 00:33:09.110 clat percentiles (usec): 00:33:09.110 | 1.00th=[ 807], 5.00th=[ 914], 10.00th=[ 988], 20.00th=[ 1090], 00:33:09.110 | 30.00th=[ 1156], 40.00th=[ 1254], 50.00th=[ 1369], 60.00th=[ 1516], 00:33:09.110 | 70.00th=[ 1663], 80.00th=[ 1844], 90.00th=[ 5604], 95.00th=[ 5932], 00:33:09.110 | 99.00th=[ 7570], 99.50th=[ 8160], 99.90th=[ 8717], 99.95th=[ 8848], 00:33:09.110 | 99.99th=[ 8979] 00:33:09.110 bw ( KiB/s): min=92448, max=96384, per=48.46%, avg=94328.00, stdev=1640.32, samples=4 00:33:09.110 iops : min= 5778, max= 6024, avg=5895.50, stdev=102.52, samples=4 00:33:09.110 write: IOPS=6884, BW=108MiB/s (113MB/s)(192MiB/1782msec); 0 zone resets 00:33:09.110 slat (usec): min=26, max=107, avg=28.97, stdev= 5.21 00:33:09.110 clat (usec): min=5413, max=29143, avg=15118.77, stdev=2366.03 00:33:09.110 lat (usec): min=5446, max=29170, avg=15147.74, stdev=2365.68 00:33:09.110 clat percentiles (usec): 00:33:09.110 | 1.00th=[ 7373], 5.00th=[11994], 10.00th=[12780], 20.00th=[13566], 00:33:09.110 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14877], 60.00th=[15401], 00:33:09.110 | 70.00th=[15926], 80.00th=[16581], 90.00th=[17957], 95.00th=[19268], 00:33:09.110 | 99.00th=[21890], 99.50th=[22938], 99.90th=[28443], 99.95th=[28705], 00:33:09.110 | 99.99th=[29230] 00:33:09.110 bw ( KiB/s): min=95968, max=98944, per=88.26%, avg=97224.00, stdev=1498.97, samples=4 00:33:09.110 iops : min= 5998, max= 6184, avg=6076.50, stdev=93.69, samples=4 00:33:09.110 lat (usec) : 750=0.17%, 1000=7.23% 00:33:09.110 lat (msec) : 2=47.64%, 4=3.10%, 10=8.47%, 20=32.39%, 50=1.00% 00:33:09.110 cpu : usr=95.76%, sys=2.54%, ctx=184, majf=0, minf=10843 00:33:09.110 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:33:09.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:09.110 issued rwts: total=23880,12268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:09.110 00:33:09.110 Run status group 0 (all jobs): 00:33:09.110 READ: bw=190MiB/s (199MB/s), 190MiB/s-190MiB/s (199MB/s-199MB/s), io=373MiB (391MB), run=1963-1963msec 00:33:09.110 WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=192MiB (201MB), run=1782-1782msec 00:33:09.370 ----------------------------------------------------- 00:33:09.370 Suppressions used: 00:33:09.370 count bytes template 00:33:09.370 1 63 /usr/src/fio/parse.c 00:33:09.370 191 18336 /usr/src/fio/iolog.c 00:33:09.370 1 8 libtcmalloc_minimal.so 00:33:09.370 ----------------------------------------------------- 00:33:09.370 00:33:09.370 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:09.630 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:09.630 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:09.630 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:09.630 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:09.630 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:33:09.630 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:09.630 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:09.630 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:09.630 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:09.630 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:33:09.630 11:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:33:12.920 Nvme0n1 00:33:12.920 11:05:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=a1235c1c-e42f-4ada-9430-4c2eafa2c0a2 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb a1235c1c-e42f-4ada-9430-4c2eafa2c0a2 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=a1235c1c-e42f-4ada-9430-4c2eafa2c0a2 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:19.484 { 00:33:19.484 "uuid": "a1235c1c-e42f-4ada-9430-4c2eafa2c0a2", 00:33:19.484 "name": "lvs_0", 00:33:19.484 "base_bdev": "Nvme0n1", 00:33:19.484 "total_data_clusters": 1862, 00:33:19.484 "free_clusters": 1862, 00:33:19.484 "block_size": 512, 00:33:19.484 "cluster_size": 1073741824 00:33:19.484 } 00:33:19.484 ]' 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="a1235c1c-e42f-4ada-9430-4c2eafa2c0a2") .free_clusters' 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1862 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="a1235c1c-e42f-4ada-9430-4c2eafa2c0a2") .cluster_size' 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1906688 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1906688 00:33:19.484 1906688 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:33:19.484 6a4b5dd5-03a7-451e-9b97-fff6bebbd26b 00:33:19.484 11:05:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
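Just before this step the local NVMe controller was discovered by piping gen_nvme.sh through jq for .config[].params.traddr (0000:d8:00.0 here) and attached as Nvme0. get_lvs_free_mb then turns the lvstore JSON from bdev_lvol_get_lvstores into a size in MiB: 1862 free clusters times a 1 GiB cluster size gives 1906688 MiB, which is passed straight to bdev_lvol_create. A worked sketch of that arithmetic, assuming the same jq filters as in the trace:

    rpc=$SPDK_DIR/scripts/rpc.py
    lvs_uuid=a1235c1c-e42f-4ada-9430-4c2eafa2c0a2

    # Query the lvstores once, then pull this store's free cluster count and cluster size
    lvs_info=$($rpc bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters" <<< "$lvs_info")   # 1862
    cs=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size" <<< "$lvs_info")    # 1073741824

    # 1862 clusters * 1 GiB per cluster = 1906688 MiB available for the logical volume
    free_mb=$(( fc * cs / 1024 / 1024 ))
    $rpc bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"

Later in the run the same computation on the nested store lvs_n_0 yields 476206 clusters times 4 MiB, i.e. 1904824 MiB for lbd_nest_0.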
nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:19.484 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:19.744 11:05:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:20.349 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:20.349 fio-3.35 00:33:20.349 Starting 1 thread 00:33:22.884 00:33:22.884 test: (groupid=0, jobs=1): err= 0: pid=1873191: Thu Nov 28 11:06:01 2024 00:33:22.884 read: IOPS=8700, BW=34.0MiB/s (35.6MB/s)(68.1MiB/2005msec) 00:33:22.884 slat (nsec): min=1513, 
max=40075, avg=1734.81, stdev=482.86 00:33:22.884 clat (usec): min=205, max=332973, avg=7286.90, stdev=19838.25 00:33:22.884 lat (usec): min=207, max=332976, avg=7288.63, stdev=19838.31 00:33:22.884 clat percentiles (msec): 00:33:22.884 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:33:22.884 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:33:22.884 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:33:22.884 | 99.00th=[ 8], 99.50th=[ 9], 99.90th=[ 334], 99.95th=[ 334], 00:33:22.884 | 99.99th=[ 334] 00:33:22.884 bw ( KiB/s): min=12792, max=42504, per=99.91%, avg=34768.00, stdev=14654.13, samples=4 00:33:22.884 iops : min= 3198, max=10626, avg=8692.00, stdev=3663.53, samples=4 00:33:22.884 write: IOPS=8695, BW=34.0MiB/s (35.6MB/s)(68.1MiB/2005msec); 0 zone resets 00:33:22.884 slat (nsec): min=1577, max=17984, avg=1831.06, stdev=337.05 00:33:22.884 clat (usec): min=171, max=333394, avg=7254.07, stdev=19314.25 00:33:22.884 lat (usec): min=172, max=333400, avg=7255.90, stdev=19314.33 00:33:22.884 clat percentiles (msec): 00:33:22.884 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:33:22.884 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:33:22.884 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:33:22.884 | 99.00th=[ 8], 99.50th=[ 10], 99.90th=[ 334], 99.95th=[ 334], 00:33:22.884 | 99.99th=[ 334] 00:33:22.884 bw ( KiB/s): min=13304, max=42184, per=99.86%, avg=34734.00, stdev=14289.89, samples=4 00:33:22.884 iops : min= 3326, max=10546, avg=8683.50, stdev=3572.47, samples=4 00:33:22.884 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:33:22.884 lat (msec) : 2=0.03%, 4=0.22%, 10=99.31%, 20=0.04%, 500=0.37% 00:33:22.884 cpu : usr=99.25%, sys=0.35%, ctx=16, majf=0, minf=1937 00:33:22.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:22.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:22.884 issued rwts: total=17444,17434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.884 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:22.884 00:33:22.884 Run status group 0 (all jobs): 00:33:22.884 READ: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.5MB), run=2005-2005msec 00:33:22.884 WRITE: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.4MB), run=2005-2005msec 00:33:22.884 ----------------------------------------------------- 00:33:22.884 Suppressions used: 00:33:22.884 count bytes template 00:33:22.884 1 64 /usr/src/fio/parse.c 00:33:22.884 1 8 libtcmalloc_minimal.so 00:33:22.884 ----------------------------------------------------- 00:33:22.884 00:33:22.884 11:06:01 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:23.143 11:06:01 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=8d19a723-6622-41e8-afcb-7b41f0122d1d 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 8d19a723-6622-41e8-afcb-7b41f0122d1d 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local 
lvs_uuid=8d19a723-6622-41e8-afcb-7b41f0122d1d 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:24.521 { 00:33:24.521 "uuid": "a1235c1c-e42f-4ada-9430-4c2eafa2c0a2", 00:33:24.521 "name": "lvs_0", 00:33:24.521 "base_bdev": "Nvme0n1", 00:33:24.521 "total_data_clusters": 1862, 00:33:24.521 "free_clusters": 0, 00:33:24.521 "block_size": 512, 00:33:24.521 "cluster_size": 1073741824 00:33:24.521 }, 00:33:24.521 { 00:33:24.521 "uuid": "8d19a723-6622-41e8-afcb-7b41f0122d1d", 00:33:24.521 "name": "lvs_n_0", 00:33:24.521 "base_bdev": "6a4b5dd5-03a7-451e-9b97-fff6bebbd26b", 00:33:24.521 "total_data_clusters": 476206, 00:33:24.521 "free_clusters": 476206, 00:33:24.521 "block_size": 512, 00:33:24.521 "cluster_size": 4194304 00:33:24.521 } 00:33:24.521 ]' 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="8d19a723-6622-41e8-afcb-7b41f0122d1d") .free_clusters' 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=476206 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="8d19a723-6622-41e8-afcb-7b41f0122d1d") .cluster_size' 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1904824 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1904824 00:33:24.521 1904824 00:33:24.521 11:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:33:27.057 1cfda144-f734-47f7-818f-9b47719afe32 00:33:27.057 11:06:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:27.317 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:27.576 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 
traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:27.835 11:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:28.095 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:28.095 fio-3.35 00:33:28.095 Starting 1 thread 00:33:30.631 00:33:30.631 test: (groupid=0, jobs=1): err= 0: pid=1875190: Thu Nov 28 11:06:09 2024 00:33:30.631 read: IOPS=8910, BW=34.8MiB/s (36.5MB/s)(69.8MiB/2006msec) 00:33:30.631 slat (nsec): min=1497, max=26763, avg=1670.96, stdev=333.87 00:33:30.631 clat (usec): min=3918, max=12281, avg=7088.55, stdev=257.33 00:33:30.631 lat (usec): min=3921, max=12282, avg=7090.22, stdev=257.29 00:33:30.631 clat percentiles (usec): 00:33:30.631 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 7046], 20.00th=[ 7046], 00:33:30.631 | 30.00th=[ 7046], 40.00th=[ 7046], 50.00th=[ 7111], 60.00th=[ 7111], 00:33:30.631 | 70.00th=[ 7111], 80.00th=[ 7111], 90.00th=[ 7177], 95.00th=[ 7177], 00:33:30.631 | 99.00th=[ 8029], 99.50th=[ 8291], 99.90th=[10421], 99.95th=[12125], 00:33:30.631 | 99.99th=[12256] 00:33:30.631 bw ( KiB/s): min=34112, max=36368, per=99.95%, avg=35626.00, stdev=1023.82, samples=4 00:33:30.631 iops : min= 8528, max= 9092, avg=8906.50, stdev=255.96, samples=4 00:33:30.631 write: IOPS=8923, BW=34.9MiB/s (36.5MB/s)(69.9MiB/2006msec); 0 zone resets 00:33:30.631 slat (nsec): min=1520, max=12347, avg=1755.87, stdev=319.95 00:33:30.631 clat (usec): min=3928, max=12295, avg=7116.29, stdev=271.39 00:33:30.631 lat (usec): 
min=3932, max=12297, avg=7118.04, stdev=271.36 00:33:30.631 clat percentiles (usec): 00:33:30.631 | 1.00th=[ 6259], 5.00th=[ 7046], 10.00th=[ 7046], 20.00th=[ 7046], 00:33:30.631 | 30.00th=[ 7111], 40.00th=[ 7111], 50.00th=[ 7111], 60.00th=[ 7111], 00:33:30.631 | 70.00th=[ 7111], 80.00th=[ 7111], 90.00th=[ 7177], 95.00th=[ 7177], 00:33:30.631 | 99.00th=[ 8029], 99.50th=[ 8455], 99.90th=[11469], 99.95th=[12125], 00:33:30.631 | 99.99th=[12256] 00:33:30.631 bw ( KiB/s): min=34912, max=36128, per=99.94%, avg=35670.00, stdev=567.02, samples=4 00:33:30.631 iops : min= 8728, max= 9032, avg=8917.50, stdev=141.75, samples=4 00:33:30.631 lat (msec) : 4=0.03%, 10=99.83%, 20=0.14% 00:33:30.631 cpu : usr=99.40%, sys=0.25%, ctx=15, majf=0, minf=1736 00:33:30.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:30.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:30.631 issued rwts: total=17875,17900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:30.631 00:33:30.631 Run status group 0 (all jobs): 00:33:30.631 READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.8MiB (73.2MB), run=2006-2006msec 00:33:30.631 WRITE: bw=34.9MiB/s (36.5MB/s), 34.9MiB/s-34.9MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2006-2006msec 00:33:30.891 ----------------------------------------------------- 00:33:30.891 Suppressions used: 00:33:30.891 count bytes template 00:33:30.891 1 64 /usr/src/fio/parse.c 00:33:30.891 1 8 libtcmalloc_minimal.so 00:33:30.891 ----------------------------------------------------- 00:33:30.891 00:33:30.891 11:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:31.151 11:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:31.151 11:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:41.141 11:06:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:41.141 11:06:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:46.425 11:06:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:46.425 11:06:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:49.717 11:06:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:49.717 11:06:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:49.717 11:06:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:49.717 11:06:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:49.717 11:06:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:49.717 11:06:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:33:49.717 11:06:27 nvmf_rdma.nvmf_host.nvmf_fio_host 
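nvmftestfini, entered just above, removes the nvme-rdma and nvme-fabrics modules and then stops the target through killprocess. A condensed sketch of the path taken in this run; the pid is the one recorded when nvmf_tgt was started (1869577), and the branch killprocess takes when the process runs under sudo is omitted:

    # Unload the fabrics modules brought in for the test, then stop the target
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics

    nvmfpid=1869577   # pid printed earlier when nvmf_tgt was launched
    if [ "$(ps --no-headers -o comm= "$nvmfpid")" != sudo ]; then
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi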
-- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:49.717 11:06:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:49.717 11:06:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:49.717 11:06:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:49.717 rmmod nvme_rdma 00:33:49.717 rmmod nvme_fabrics 00:33:49.717 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:49.717 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:49.717 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:49.717 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1869577 ']' 00:33:49.717 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1869577 00:33:49.717 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1869577 ']' 00:33:49.717 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1869577 00:33:49.717 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:49.717 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:49.718 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1869577 00:33:49.718 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:49.718 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:49.718 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1869577' 00:33:49.718 killing process with pid 1869577 00:33:49.718 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1869577 00:33:49.718 11:06:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1869577 00:33:51.098 11:06:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:51.098 11:06:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:33:51.098 00:33:51.098 real 0m59.046s 00:33:51.098 user 4m5.250s 00:33:51.098 sys 0m13.126s 00:33:51.098 11:06:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.098 11:06:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.098 ************************************ 00:33:51.098 END TEST nvmf_fio_host 00:33:51.098 ************************************ 00:33:51.098 11:06:29 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:33:51.098 11:06:29 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:51.098 11:06:29 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.098 11:06:29 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.098 ************************************ 00:33:51.098 START TEST nvmf_failover 00:33:51.098 ************************************ 00:33:51.098 11:06:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:33:51.358 * 
Looking for test storage... 00:33:51.358 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:51.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.358 --rc genhtml_branch_coverage=1 00:33:51.358 --rc genhtml_function_coverage=1 00:33:51.358 --rc genhtml_legend=1 00:33:51.358 --rc geninfo_all_blocks=1 00:33:51.358 --rc geninfo_unexecuted_blocks=1 00:33:51.358 00:33:51.358 ' 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:51.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.358 --rc genhtml_branch_coverage=1 00:33:51.358 --rc genhtml_function_coverage=1 00:33:51.358 --rc genhtml_legend=1 00:33:51.358 --rc geninfo_all_blocks=1 00:33:51.358 --rc geninfo_unexecuted_blocks=1 00:33:51.358 00:33:51.358 ' 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:51.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.358 --rc genhtml_branch_coverage=1 00:33:51.358 --rc genhtml_function_coverage=1 00:33:51.358 --rc genhtml_legend=1 00:33:51.358 --rc geninfo_all_blocks=1 00:33:51.358 --rc geninfo_unexecuted_blocks=1 00:33:51.358 00:33:51.358 ' 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:51.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.358 --rc genhtml_branch_coverage=1 00:33:51.358 --rc genhtml_function_coverage=1 00:33:51.358 --rc genhtml_legend=1 00:33:51.358 --rc geninfo_all_blocks=1 00:33:51.358 --rc geninfo_unexecuted_blocks=1 00:33:51.358 00:33:51.358 ' 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.358 11:06:30 
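The lt 1.15 2 check traced above decides which lcov options get exported: cmp_versions splits both version strings into components and compares them field by field, and here it returns 0, so the pre-2.0 lcov_branch_coverage/lcov_function_coverage flags are used. A simplified stand-in for that comparison (SPDK's scripts/common.sh handles more separators and operators than this sketch):

    # Return success if dot-separated version $1 is strictly lower than $2
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"   # true here, matching the return 0 in the trace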
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:51.358 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:51.358 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:51.359 11:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:59.482 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:59.482 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:59.482 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:59.483 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:59.483 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:59.483 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:59.483 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:59.483 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:59.743 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:59.743 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:59.743 altname enp217s0f0np0 00:33:59.743 altname ens818f0np0 00:33:59.743 inet 192.168.100.8/24 scope global mlx_0_0 00:33:59.743 
valid_lft forever preferred_lft forever 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:59.743 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:59.743 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:59.743 altname enp217s0f1np1 00:33:59.743 altname ens818f1np1 00:33:59.743 inet 192.168.100.9/24 scope global mlx_0_1 00:33:59.743 valid_lft forever preferred_lft forever 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:59.743 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:59.744 11:06:38 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:59.744 192.168.100.9' 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:59.744 192.168.100.9' 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:59.744 192.168.100.9' 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1882810 00:33:59.744 
11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1882810 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1882810 ']' 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:59.744 11:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:00.003 [2024-11-28 11:06:38.629025] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:34:00.003 [2024-11-28 11:06:38.629117] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:00.003 [2024-11-28 11:06:38.784113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:00.263 [2024-11-28 11:06:38.884494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:00.263 [2024-11-28 11:06:38.884541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:00.263 [2024-11-28 11:06:38.884554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:00.263 [2024-11-28 11:06:38.884567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:00.263 [2024-11-28 11:06:38.884576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
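The entries above show the harness launching the NVMe-oF target (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE) and then waiting in waitforlisten until the RPC socket /var/tmp/spdk.sock answers before any rpc.py calls are issued. A minimal sketch of that start-and-wait pattern follows; the binary path, socket path, and flags are the ones visible in this log, while the retry count, sleep interval, and the rpc_get_methods probe are illustrative assumptions rather than the harness's exact logic.

#!/usr/bin/env bash
# Sketch only: start nvmf_tgt, then poll its RPC socket before issuing RPCs.
# Paths and flags are taken from the log above; the loop bound, sleep
# interval, and the rpc_get_methods probe are assumptions for illustration.
spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc_sock=/var/tmp/spdk.sock

"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

for _ in $(seq 1 100); do
    # The UNIX socket appears once the app's RPC server is up; a successful
    # rpc_get_methods call confirms it is actually accepting requests.
    if [ -S "$rpc_sock" ] && "$spdk/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done

# From here the harness drives the target over rpc.py (create transport,
# malloc bdev, subsystem, listeners), as the entries that follow record.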
00:34:00.263 [2024-11-28 11:06:38.887009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:00.263 [2024-11-28 11:06:38.887069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:00.263 [2024-11-28 11:06:38.887076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:00.831 11:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:00.831 11:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:00.831 11:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:00.831 11:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:00.831 11:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:00.831 11:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:00.831 11:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:00.831 [2024-11-28 11:06:39.656965] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f1670376940) succeed. 00:34:00.831 [2024-11-28 11:06:39.666492] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f1670332940) succeed. 00:34:01.090 11:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:01.350 Malloc0 00:34:01.350 11:06:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:01.634 11:06:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:01.949 11:06:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:01.949 [2024-11-28 11:06:40.742281] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:01.949 11:06:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:02.307 [2024-11-28 11:06:40.918632] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:02.307 11:06:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:34:02.307 [2024-11-28 11:06:41.095281] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:34:02.307 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1883359 00:34:02.307 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock 
-q 128 -o 4096 -w verify -t 15 -f 00:34:02.307 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:02.307 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1883359 /var/tmp/bdevperf.sock 00:34:02.307 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1883359 ']' 00:34:02.307 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:02.307 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:02.307 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:02.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:02.307 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:02.307 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:03.246 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:03.246 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:03.246 11:06:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:03.506 NVMe0n1 00:34:03.506 11:06:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:03.765 00:34:03.765 11:06:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1883624 00:34:03.765 11:06:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:03.765 11:06:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:34:04.701 11:06:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:04.960 11:06:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:34:08.251 11:06:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:08.251 00:34:08.251 11:06:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:08.512 11:06:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:34:11.804 11:06:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:11.804 [2024-11-28 11:06:50.432330] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:11.804 11:06:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:12.741 11:06:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:34:13.000 11:06:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1883624 00:34:19.568 { 00:34:19.568 "results": [ 00:34:19.568 { 00:34:19.568 "job": "NVMe0n1", 00:34:19.568 "core_mask": "0x1", 00:34:19.568 "workload": "verify", 00:34:19.568 "status": "finished", 00:34:19.568 "verify_range": { 00:34:19.568 "start": 0, 00:34:19.568 "length": 16384 00:34:19.568 }, 00:34:19.568 "queue_depth": 128, 00:34:19.568 "io_size": 4096, 00:34:19.568 "runtime": 15.006234, 00:34:19.568 "iops": 12242.4453730363, 00:34:19.568 "mibps": 47.82205223842305, 00:34:19.568 "io_failed": 4507, 00:34:19.568 "io_timeout": 0, 00:34:19.568 "avg_latency_us": 10179.464593178196, 00:34:19.568 "min_latency_us": 511.1808, 00:34:19.568 "max_latency_us": 1053609.1648 00:34:19.568 } 00:34:19.568 ], 00:34:19.568 "core_count": 1 00:34:19.568 } 00:34:19.568 11:06:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1883359 00:34:19.568 11:06:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1883359 ']' 00:34:19.568 11:06:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1883359 00:34:19.568 11:06:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:19.568 11:06:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:19.568 11:06:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1883359 00:34:19.568 11:06:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:19.568 11:06:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:19.568 11:06:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1883359' 00:34:19.568 killing process with pid 1883359 00:34:19.568 11:06:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1883359 00:34:19.568 11:06:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1883359 00:34:20.144 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:20.144 [2024-11-28 11:06:41.192940] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:34:20.144 [2024-11-28 11:06:41.193039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883359 ] 00:34:20.144 [2024-11-28 11:06:41.344370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.144 [2024-11-28 11:06:41.446209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.144 Running I/O for 15 seconds... 00:34:20.144 15616.00 IOPS, 61.00 MiB/s [2024-11-28T10:06:59.025Z] 8512.00 IOPS, 33.25 MiB/s [2024-11-28T10:06:59.025Z] [2024-11-28 11:06:44.742307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:20.144 [2024-11-28 11:06:44.742370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.144 [2024-11-28 11:06:44.742389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:20.144 [2024-11-28 11:06:44.742404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.144 [2024-11-28 11:06:44.742417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:20.144 [2024-11-28 11:06:44.742433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.144 [2024-11-28 11:06:44.742446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:20.144 [2024-11-28 11:06:44.742460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.144 [2024-11-28 11:06:44.744697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:34:20.144 [2024-11-28 11:06:44.744731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:34:20.144 [2024-11-28 11:06:44.744751] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:34:20.144 [2024-11-28 11:06:44.744767] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:34:20.144 [2024-11-28 11:06:44.744805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.144 [2024-11-28 11:06:44.744823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.144 [2024-11-28 11:06:44.744900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.144 [2024-11-28 11:06:44.744918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.144 [2024-11-28 11:06:44.744958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.144 [2024-11-28 11:06:44.744974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.144 [2024-11-28 11:06:44.745014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.144 [2024-11-28 11:06:44.745030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.144 [2024-11-28 11:06:44.745069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.144 [2024-11-28 11:06:44.745085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.144 [2024-11-28 11:06:44.745130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.144 [2024-11-28 11:06:44.745146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 
nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.745968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.745984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.746039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.746095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.746149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.746202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.746256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.746309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.746363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.746419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:20.145 [2024-11-28 11:06:44.746475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.746533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.746588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.145 [2024-11-28 11:06:44.746646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fd000 len:0x1000 key:0x182700 00:34:20.145 [2024-11-28 11:06:44.746702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x182700 00:34:20.145 [2024-11-28 11:06:44.746761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 key:0x182700 00:34:20.145 [2024-11-28 11:06:44.746817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x182700 00:34:20.145 [2024-11-28 11:06:44.746875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x182700 00:34:20.145 [2024-11-28 11:06:44.746933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.746973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x182700 00:34:20.145 [2024-11-28 11:06:44.746991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.747032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x182700 00:34:20.145 [2024-11-28 11:06:44.747056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.747097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x182700 00:34:20.145 [2024-11-28 11:06:44.747113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.747153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x182700 00:34:20.145 [2024-11-28 11:06:44.747169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.747211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x182700 00:34:20.145 [2024-11-28 11:06:44.747226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.747266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x182700 00:34:20.145 [2024-11-28 11:06:44.747282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.145 [2024-11-28 11:06:44.747323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.747379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.747435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.747495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.747553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 
key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.747613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.747670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043db000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.747725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d9000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.747781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.747836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.747893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d3000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.747950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d1000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.747966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:5392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.748973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.748990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.749029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.749045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.749084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.749100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.749139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 
11:06:44.749154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.749196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.749211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.749251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.749269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.749309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.749325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.749365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.749380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.146 [2024-11-28 11:06:44.749420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x182700 00:34:20.146 [2024-11-28 11:06:44.749437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.749477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.749492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.749533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.749549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.749589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.749605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.749650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.749668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:20.147 [2024-11-28 11:06:44.749708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.749726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.749766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.749782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.749821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.749839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.749879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.749895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.749935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.749954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.749994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5616 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.750962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.750978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.751017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.751034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.751074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.751092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.751132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.751150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.751189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.751205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.751246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.751261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 
11:06:44.751301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.751317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.751356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.751372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.751411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.751427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.751466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.751481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.147 [2024-11-28 11:06:44.751520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x182700 00:34:20.147 [2024-11-28 11:06:44.751537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:44.751577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x182700 00:34:20.148 [2024-11-28 11:06:44.751593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:44.751640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x182700 00:34:20.148 [2024-11-28 11:06:44.751657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:44.751696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x182700 00:34:20.148 [2024-11-28 11:06:44.751714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:44.751753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x182700 00:34:20.148 [2024-11-28 11:06:44.751770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:44.751810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5840 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200004349000 len:0x1000 key:0x182700 00:34:20.148 [2024-11-28 11:06:44.751827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:44.751867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x182700 00:34:20.148 [2024-11-28 11:06:44.751883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:44.751923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x182700 00:34:20.148 [2024-11-28 11:06:44.751938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:44.751977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x182700 00:34:20.148 [2024-11-28 11:06:44.751995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:44.780165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:20.148 [2024-11-28 11:06:44.780193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:20.148 [2024-11-28 11:06:44.780207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5872 len:8 PRP1 0x0 PRP2 0x0 00:34:20.148 [2024-11-28 11:06:44.780223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:44.780445] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:34:20.148 [2024-11-28 11:06:44.780517] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:34:20.148 [2024-11-28 11:06:44.783547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:20.148 [2024-11-28 11:06:44.828471] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:34:20.148 9866.33 IOPS, 38.54 MiB/s [2024-11-28T10:06:59.029Z] 11287.75 IOPS, 44.09 MiB/s [2024-11-28T10:06:59.029Z] 10774.40 IOPS, 42.09 MiB/s [2024-11-28T10:06:59.029Z] [2024-11-28 11:06:48.225344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x182f00 00:34:20.148 [2024-11-28 11:06:48.225458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x182f00 00:34:20.148 [2024-11-28 11:06:48.225490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x182f00 00:34:20.148 [2024-11-28 11:06:48.225522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x182f00 00:34:20.148 [2024-11-28 11:06:48.225555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x182f00 00:34:20.148 [2024-11-28 11:06:48.225584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x182f00 00:34:20.148 [2024-11-28 11:06:48.225620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x182f00 00:34:20.148 [2024-11-28 11:06:48.225649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x182f00 00:34:20.148 [2024-11-28 11:06:48.225677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:20.148 [2024-11-28 11:06:48.225692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.225977] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.225995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.226009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.226023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.226038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.226052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.226067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.226081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.226095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.226109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.226122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.226137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.148 [2024-11-28 11:06:48.226150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.148 [2024-11-28 11:06:48.226164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226263] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52384 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 
[2024-11-28 11:06:48.226855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.149 [2024-11-28 11:06:48.226911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x182f00 00:34:20.149 [2024-11-28 11:06:48.226939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x182f00 00:34:20.149 [2024-11-28 11:06:48.226968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.226982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x182f00 00:34:20.149 [2024-11-28 11:06:48.226997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.227012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x182f00 00:34:20.149 [2024-11-28 11:06:48.227027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.227041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x182f00 00:34:20.149 [2024-11-28 11:06:48.227055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.227070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x182f00 00:34:20.149 [2024-11-28 11:06:48.227084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.149 [2024-11-28 11:06:48.227098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x182f00 00:34:20.149 [2024-11-28 11:06:48.227112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 
11:06:48.227127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.227170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.227197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.227225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.227253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.227280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 
len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.227974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182f00 00:34:20.150 [2024-11-28 11:06:48.227988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.228002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.228018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.228031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.228045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.228059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.228075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.228089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.228105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.228118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.228153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.228166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.228180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.228195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.228209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.228224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.150 [2024-11-28 11:06:48.228238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.150 [2024-11-28 11:06:48.228251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:20.151 [2024-11-28 11:06:48.228265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52032 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000436b000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228804] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.151 [2024-11-28 11:06:48.228892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.228984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.228996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.229009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.229021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.229036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x182f00 00:34:20.151 [2024-11-28 11:06:48.229048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.231040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:20.151 [2024-11-28 11:06:48.231061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:20.151 [2024-11-28 11:06:48.231075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52128 len:8 PRP1 0x0 PRP2 0x0 00:34:20.151 [2024-11-28 11:06:48.231089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:48.231289] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:34:20.151 [2024-11-28 11:06:48.231306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:34:20.151 [2024-11-28 11:06:48.234397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:34:20.151 [2024-11-28 11:06:48.262500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:34:20.151 [2024-11-28 11:06:48.305205] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:34:20.151 9867.50 IOPS, 38.54 MiB/s [2024-11-28T10:06:59.032Z] 10704.14 IOPS, 41.81 MiB/s [2024-11-28T10:06:59.032Z] 11318.38 IOPS, 44.21 MiB/s [2024-11-28T10:06:59.032Z] 11767.33 IOPS, 45.97 MiB/s [2024-11-28T10:06:59.032Z] [2024-11-28 11:06:52.643096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x182700 00:34:20.151 [2024-11-28 11:06:52.643159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:52.643193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x182700 00:34:20.151 [2024-11-28 11:06:52.643208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:52.643224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x182700 00:34:20.151 [2024-11-28 11:06:52.643237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:52.643252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182700 00:34:20.151 [2024-11-28 11:06:52.643265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.151 [2024-11-28 11:06:52.643280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x182700 00:34:20.152 [2024-11-28 11:06:52.643293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x182700 00:34:20.152 [2024-11-28 11:06:52.643320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x182700 00:34:20.152 [2024-11-28 11:06:52.643564] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x182700 00:34:20.152 [2024-11-28 11:06:52.643591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x182700 00:34:20.152 [2024-11-28 11:06:52.643628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x182700 00:34:20.152 [2024-11-28 11:06:52.643655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x182700 00:34:20.152 [2024-11-28 11:06:52.643682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x182700 00:34:20.152 [2024-11-28 11:06:52.643712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x182700 00:34:20.152 [2024-11-28 11:06:52.643740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x182700 00:34:20.152 [2024-11-28 11:06:52.643766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.643985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.643996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.152 [2024-11-28 11:06:52.644306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.152 [2024-11-28 11:06:52.644320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87448 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 
key:0x182700 00:34:20.153 [2024-11-28 11:06:52.644624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.644650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.644679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.644707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.644733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.644760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.644787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.644812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:20.153 [2024-11-28 11:06:52.644878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.644981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.644995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.645008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.153 [2024-11-28 11:06:52.645020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.645035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.645047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.645061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.645073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.645087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.645099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.645114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.645126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.645139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.645151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.645165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.645179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.153 [2024-11-28 11:06:52.645193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x182700 00:34:20.153 [2024-11-28 11:06:52.645205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 
11:06:52.645390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.645651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 
[2024-11-28 11:06:52.645907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.645977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.645991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.646003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.646018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.646030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.646044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.646056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.646070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182700 00:34:20.154 [2024-11-28 11:06:52.646082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.646096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.646107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.646121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.646133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.646151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 
11:06:52.646163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.646178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.646191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.646205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.646217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.154 [2024-11-28 11:06:52.646231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.154 [2024-11-28 11:06:52.646243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.646257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.155 [2024-11-28 11:06:52.646269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.646285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.155 [2024-11-28 11:06:52.646297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.646312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x182700 00:34:20.155 [2024-11-28 11:06:52.646324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.646338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x182700 00:34:20.155 [2024-11-28 11:06:52.646350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.646364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x182700 00:34:20.155 [2024-11-28 11:06:52.646376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.646391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x182700 00:34:20.155 [2024-11-28 11:06:52.646404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.646418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87176 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x182700 00:34:20.155 [2024-11-28 11:06:52.646431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.646445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x182700 00:34:20.155 [2024-11-28 11:06:52.646457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.646471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x182700 00:34:20.155 [2024-11-28 11:06:52.646484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.646498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x182700 00:34:20.155 [2024-11-28 11:06:52.646510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.646523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:20.155 [2024-11-28 11:06:52.646535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.648542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:20.155 [2024-11-28 11:06:52.648562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:20.155 [2024-11-28 11:06:52.648576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87792 len:8 PRP1 0x0 PRP2 0x0 00:34:20.155 [2024-11-28 11:06:52.648590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.155 [2024-11-28 11:06:52.648790] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:34:20.155 [2024-11-28 11:06:52.648808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:34:20.155 [2024-11-28 11:06:52.651904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:34:20.155 [2024-11-28 11:06:52.679685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:34:20.155 10590.60 IOPS, 41.37 MiB/s [2024-11-28T10:06:59.036Z] [2024-11-28 11:06:52.718045] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:34:20.155 10998.91 IOPS, 42.96 MiB/s [2024-11-28T10:06:59.036Z] 11394.42 IOPS, 44.51 MiB/s [2024-11-28T10:06:59.036Z] 11726.92 IOPS, 45.81 MiB/s [2024-11-28T10:06:59.036Z] 12013.86 IOPS, 46.93 MiB/s 00:34:20.155 Latency(us) 00:34:20.155 [2024-11-28T10:06:59.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.155 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:20.155 Verification LBA range: start 0x0 length 0x4000 00:34:20.155 NVMe0n1 : 15.01 12242.45 47.82 300.34 0.00 10179.46 511.18 1053609.16 00:34:20.155 [2024-11-28T10:06:59.036Z] =================================================================================================================== 00:34:20.155 [2024-11-28T10:06:59.036Z] Total : 12242.45 47.82 300.34 0.00 10179.46 511.18 1053609.16 00:34:20.155 Received shutdown signal, test time was about 15.000000 seconds 00:34:20.155 00:34:20.155 Latency(us) 00:34:20.155 [2024-11-28T10:06:59.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.155 [2024-11-28T10:06:59.036Z] =================================================================================================================== 00:34:20.155 [2024-11-28T10:06:59.036Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1886291 00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1886291 /var/tmp/bdevperf.sock 00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1886291 ']' 00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:20.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
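Note on the flow traced above and below: the script confirms the first bdevperf run logged exactly three "Resetting controller successful" events, then relaunches bdevperf in RPC-server mode and drives the second phase over its socket (extra listeners, three attached paths with failover, drop of the active path, then perform_tests). A minimal sketch of that sequence, assuming the commands run from the SPDK source root against the 192.168.100.8 target shown in the trace; paths, ports, the NVMe0 name and flags are copied from the log, but this is an illustration, not the failover.sh script itself:

# Verify the first run recovered from all three injected path failures
count=$(grep -c 'Resetting controller successful' try.txt)
[ "$count" -eq 3 ] || exit 1

# Start bdevperf in RPC-server mode (-z) so the workload can be triggered later
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# Expose two more listeners on the target, then attach all three paths with failover
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
for port in 4420 4421 4422; do
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
done

# Drop the active path, give bdev_nvme time to fail over, then run the workload
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests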
00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:20.155 11:06:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:21.094 11:06:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:21.094 11:06:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:21.094 11:06:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:21.094 [2024-11-28 11:06:59.849663] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:21.094 11:06:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:34:21.353 [2024-11-28 11:07:00.054434] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:34:21.353 11:07:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:21.612 NVMe0n1 00:34:21.612 11:07:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:21.871 00:34:21.871 11:07:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:22.131 00:34:22.131 11:07:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:22.131 11:07:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:22.390 11:07:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:22.649 11:07:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:25.943 11:07:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:25.943 11:07:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:25.943 11:07:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1887115 00:34:25.943 11:07:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:25.943 11:07:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1887115 00:34:26.880 { 00:34:26.880 "results": [ 00:34:26.880 { 00:34:26.880 "job": "NVMe0n1", 
00:34:26.880 "core_mask": "0x1", 00:34:26.880 "workload": "verify", 00:34:26.880 "status": "finished", 00:34:26.880 "verify_range": { 00:34:26.880 "start": 0, 00:34:26.880 "length": 16384 00:34:26.880 }, 00:34:26.880 "queue_depth": 128, 00:34:26.880 "io_size": 4096, 00:34:26.880 "runtime": 1.012793, 00:34:26.880 "iops": 15418.747957381222, 00:34:26.880 "mibps": 60.2294842085204, 00:34:26.880 "io_failed": 0, 00:34:26.880 "io_timeout": 0, 00:34:26.880 "avg_latency_us": 8254.735947540983, 00:34:26.880 "min_latency_us": 3119.5136, 00:34:26.880 "max_latency_us": 18454.9376 00:34:26.880 } 00:34:26.880 ], 00:34:26.880 "core_count": 1 00:34:26.880 } 00:34:26.880 11:07:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:26.880 [2024-11-28 11:06:58.870537] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:34:26.880 [2024-11-28 11:06:58.870641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886291 ] 00:34:26.880 [2024-11-28 11:06:59.022891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.880 [2024-11-28 11:06:59.126325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.880 [2024-11-28 11:07:01.243668] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:34:26.880 [2024-11-28 11:07:01.244324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:34:26.880 [2024-11-28 11:07:01.244389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:34:26.880 [2024-11-28 11:07:01.277987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:34:26.880 [2024-11-28 11:07:01.291669] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:34:26.880 Running I/O for 1 seconds... 
00:34:26.880 15368.00 IOPS, 60.03 MiB/s 00:34:26.880 Latency(us) 00:34:26.880 [2024-11-28T10:07:05.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:26.880 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:26.880 Verification LBA range: start 0x0 length 0x4000 00:34:26.880 NVMe0n1 : 1.01 15418.75 60.23 0.00 0.00 8254.74 3119.51 18454.94 00:34:26.880 [2024-11-28T10:07:05.761Z] =================================================================================================================== 00:34:26.880 [2024-11-28T10:07:05.761Z] Total : 15418.75 60.23 0.00 0.00 8254.74 3119.51 18454.94 00:34:26.880 11:07:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:26.880 11:07:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:27.139 11:07:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:27.139 11:07:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:27.139 11:07:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:27.398 11:07:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:27.657 11:07:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1886291 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1886291 ']' 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1886291 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1886291 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1886291' 00:34:30.945 killing process with pid 1886291 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1886291 00:34:30.945 11:07:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1886291 00:34:31.881 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:34:31.881 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:31.881 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:31.881 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:32.139 rmmod nvme_rdma 00:34:32.139 rmmod nvme_fabrics 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1882810 ']' 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1882810 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1882810 ']' 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1882810 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1882810 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1882810' 00:34:32.139 killing process with pid 1882810 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1882810 00:34:32.139 11:07:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1882810 00:34:34.041 11:07:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:34.041 11:07:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:34.041 00:34:34.041 real 0m42.590s 00:34:34.041 user 2m15.677s 00:34:34.041 sys 0m9.278s 00:34:34.041 11:07:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.041 11:07:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
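Teardown, as the trace above shows, reverses the setup: detach the remaining paths, stop bdevperf, delete the subsystem and unload the fabrics modules. A rough sketch under the same assumptions (run from the SPDK root; the PID variables stand in for the literal process IDs recorded earlier in this run):

for port in 4422 4421; do
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t rdma -a 192.168.100.8 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
kill "$bdevperf_pid" && wait "$bdevperf_pid"
sync
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f test/nvmf/host/try.txt
# nvmftestfini then removes the host-side kernel modules and kills the nvmf target
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics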
00:34:34.041 ************************************ 00:34:34.041 END TEST nvmf_failover 00:34:34.041 ************************************ 00:34:34.041 11:07:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:34:34.041 11:07:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:34.041 11:07:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.041 11:07:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.041 ************************************ 00:34:34.041 START TEST nvmf_host_discovery 00:34:34.041 ************************************ 00:34:34.041 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:34:34.041 * Looking for test storage... 00:34:34.041 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:34.041 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:34.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.042 --rc genhtml_branch_coverage=1 00:34:34.042 --rc genhtml_function_coverage=1 00:34:34.042 --rc genhtml_legend=1 00:34:34.042 --rc geninfo_all_blocks=1 00:34:34.042 --rc geninfo_unexecuted_blocks=1 00:34:34.042 00:34:34.042 ' 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:34.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.042 --rc genhtml_branch_coverage=1 00:34:34.042 --rc genhtml_function_coverage=1 00:34:34.042 --rc genhtml_legend=1 00:34:34.042 --rc geninfo_all_blocks=1 00:34:34.042 --rc geninfo_unexecuted_blocks=1 00:34:34.042 00:34:34.042 ' 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:34.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.042 --rc genhtml_branch_coverage=1 00:34:34.042 --rc genhtml_function_coverage=1 00:34:34.042 --rc genhtml_legend=1 00:34:34.042 --rc geninfo_all_blocks=1 00:34:34.042 --rc geninfo_unexecuted_blocks=1 00:34:34.042 00:34:34.042 ' 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:34.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.042 --rc genhtml_branch_coverage=1 00:34:34.042 --rc genhtml_function_coverage=1 00:34:34.042 --rc genhtml_legend=1 00:34:34.042 --rc geninfo_all_blocks=1 00:34:34.042 --rc geninfo_unexecuted_blocks=1 00:34:34.042 00:34:34.042 ' 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:34.042 11:07:12 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:34.042 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.042 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:34:34.043 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:34:34.043 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:34:34.043 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:34:34.043 00:34:34.043 real 0m0.219s 00:34:34.043 user 0m0.126s 00:34:34.043 sys 0m0.107s 00:34:34.043 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.043 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.043 ************************************ 00:34:34.043 END TEST nvmf_host_discovery 00:34:34.043 ************************************ 00:34:34.043 11:07:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:34:34.043 11:07:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:34.043 11:07:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.043 11:07:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.301 ************************************ 00:34:34.301 START TEST nvmf_host_multipath_status 00:34:34.301 ************************************ 00:34:34.301 11:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:34:34.301 * Looking for test storage... 00:34:34.301 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:34.301 11:07:13 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:34.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.301 --rc genhtml_branch_coverage=1 00:34:34.301 --rc genhtml_function_coverage=1 00:34:34.301 --rc genhtml_legend=1 00:34:34.301 --rc geninfo_all_blocks=1 00:34:34.301 --rc geninfo_unexecuted_blocks=1 00:34:34.301 00:34:34.301 ' 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:34.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.301 --rc genhtml_branch_coverage=1 00:34:34.301 --rc genhtml_function_coverage=1 00:34:34.301 --rc genhtml_legend=1 00:34:34.301 --rc geninfo_all_blocks=1 00:34:34.301 --rc geninfo_unexecuted_blocks=1 00:34:34.301 00:34:34.301 ' 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:34.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.301 --rc genhtml_branch_coverage=1 00:34:34.301 --rc genhtml_function_coverage=1 00:34:34.301 --rc genhtml_legend=1 00:34:34.301 --rc geninfo_all_blocks=1 00:34:34.301 --rc geninfo_unexecuted_blocks=1 00:34:34.301 00:34:34.301 ' 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:34.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.301 --rc genhtml_branch_coverage=1 00:34:34.301 --rc genhtml_function_coverage=1 
00:34:34.301 --rc genhtml_legend=1 00:34:34.301 --rc geninfo_all_blocks=1 00:34:34.301 --rc geninfo_unexecuted_blocks=1 00:34:34.301 00:34:34.301 ' 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:34.301 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:34:34.302 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:34.302 11:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:42.426 11:07:21 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:42.426 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:42.426 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:42.426 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:42.427 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:42.427 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:42.427 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:42.687 
11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:42.687 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:42.688 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:42.688 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:42.688 altname enp217s0f0np0 00:34:42.688 altname ens818f0np0 00:34:42.688 inet 192.168.100.8/24 scope global mlx_0_0 00:34:42.688 valid_lft forever preferred_lft forever 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
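[editor's sketch] The trace above shows how the test resolves each RDMA-capable netdev to its IPv4 address (mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9). The following is a minimal reconstruction of that pipeline as it appears in the trace; the helper name get_ip_address is taken from the log, but the body below is inferred from the traced commands rather than copied from spdk/test/nvmf/common.sh.

```bash
#!/usr/bin/env bash
# Reconstructed from the xtrace above (nvmf/common.sh@116-117):
# one-line (-o) IPv4 (-4) output of `ip addr show`; field 4 is "ADDR/PREFIX",
# so strip the prefix length to get the bare address.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # prints 192.168.100.8 on the CI node in this log
get_ip_address mlx_0_1   # prints 192.168.100.9
```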
00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:42.688 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:42.688 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:42.688 altname enp217s0f1np1 00:34:42.688 altname ens818f1np1 00:34:42.688 inet 192.168.100.9/24 scope global mlx_0_1 00:34:42.688 valid_lft forever preferred_lft forever 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:34:42.688 11:07:21 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:42.688 192.168.100.9' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:42.688 192.168.100.9' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:42.688 192.168.100.9' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.688 11:07:21 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1892666 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1892666 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1892666 ']' 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:42.688 11:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.948 [2024-11-28 11:07:21.650208] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:34:42.948 [2024-11-28 11:07:21.650307] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:42.948 [2024-11-28 11:07:21.824552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:43.206 [2024-11-28 11:07:21.927647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.206 [2024-11-28 11:07:21.927690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.206 [2024-11-28 11:07:21.927702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:43.206 [2024-11-28 11:07:21.927732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:43.207 [2024-11-28 11:07:21.927742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:43.207 [2024-11-28 11:07:21.929705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.207 [2024-11-28 11:07:21.929706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.774 11:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:43.774 11:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:43.774 11:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:43.774 11:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:43.774 11:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:43.774 11:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.774 11:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1892666 00:34:43.774 11:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:44.033 [2024-11-28 11:07:22.669689] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f66bb1a4940) succeed. 00:34:44.033 [2024-11-28 11:07:22.679154] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f66bb160940) succeed. 00:34:44.033 11:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:44.292 Malloc0 00:34:44.292 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:44.552 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:44.811 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:44.811 [2024-11-28 11:07:23.608093] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:44.811 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:45.071 [2024-11-28 11:07:23.792398] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:45.071 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1892979 00:34:45.071 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:45.071 11:07:23 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:45.071 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1892979 /var/tmp/bdevperf.sock 00:34:45.071 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1892979 ']' 00:34:45.071 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:45.071 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:45.071 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:45.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:45.071 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:45.071 11:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:46.009 11:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.009 11:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:46.009 11:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:46.009 11:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:46.577 Nvme0n1 00:34:46.577 11:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:46.577 Nvme0n1 00:34:46.577 11:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:46.577 11:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:49.116 11:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:49.116 11:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:34:49.116 11:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:49.116 11:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:50.054 11:07:28 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:50.054 11:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:50.054 11:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.054 11:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:50.313 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.313 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:50.313 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.313 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:50.573 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:50.573 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:50.573 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.573 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:50.573 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.573 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:50.573 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.573 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:50.831 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.831 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:50.831 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.831 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:51.090 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.090 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:51.090 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.090 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:51.349 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.349 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:51.349 11:07:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:51.349 11:07:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:51.609 11:07:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:52.546 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:52.546 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:52.546 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.546 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:52.806 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:52.806 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:52.806 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.806 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:53.065 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.065 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:53.065 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.065 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:53.324 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:34:53.325 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:53.325 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.325 11:07:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:53.325 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.325 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:53.325 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.325 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:53.584 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.584 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:53.584 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.584 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:53.843 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.843 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:53.843 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:53.843 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:34:54.102 11:07:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:55.039 11:07:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:55.039 11:07:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:55.299 11:07:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.299 11:07:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:55.299 11:07:34 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.299 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:55.299 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.299 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:55.559 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:55.559 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:55.559 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.559 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:55.818 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.818 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:55.818 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.818 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:56.077 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.077 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:56.077 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.077 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:56.077 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.077 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:56.077 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.077 11:07:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:56.337 11:07:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.337 11:07:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:34:56.337 11:07:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:56.596 11:07:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:34:56.596 11:07:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:57.976 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:57.977 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:57.977 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.977 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:57.977 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.977 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:57.977 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.977 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:57.977 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:57.977 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:57.977 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.977 11:07:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:58.237 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.237 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:58.237 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.237 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:58.497 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.497 11:07:37 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:58.497 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.497 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:58.757 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.757 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:58.757 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.757 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:58.757 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:58.757 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:58.757 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:34:59.016 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:34:59.276 11:07:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:00.211 11:07:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:00.211 11:07:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:00.211 11:07:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.211 11:07:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:00.470 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:00.470 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:00.470 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:00.470 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.729 11:07:39 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:00.730 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:00.730 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.730 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:00.730 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.730 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:00.730 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.730 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:00.989 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.989 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:00.989 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.989 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:01.248 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:01.248 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:01.248 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.248 11:07:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:01.507 11:07:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:01.507 11:07:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:01.507 11:07:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:35:01.507 11:07:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:01.766 11:07:40 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:02.705 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:02.705 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:02.705 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.705 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:02.965 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:02.965 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:02.965 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.965 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:03.224 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.224 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:03.224 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.224 11:07:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:03.484 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.484 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:03.484 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.484 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:03.484 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.484 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:03.484 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.484 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:03.744 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:35:03.744 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:03.744 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.744 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:04.004 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.004 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:04.263 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:04.263 11:07:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:35:04.263 11:07:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:04.522 11:07:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:05.460 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:05.460 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:05.460 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.460 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:05.722 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.722 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:05.722 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.722 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:06.003 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.003 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:06.003 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.003 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:06.003 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.003 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:06.003 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.003 11:07:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:06.296 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.296 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:06.296 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.296 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:06.591 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.591 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:06.591 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.591 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:06.591 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.591 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:06.591 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:06.867 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:07.125 11:07:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:08.061 11:07:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:08.061 11:07:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:08.061 11:07:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.061 11:07:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:08.320 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.320 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:08.320 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.320 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:08.579 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.579 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:08.579 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.579 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:08.579 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.579 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:08.579 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.579 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:08.838 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.838 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:08.838 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.838 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:09.098 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.098 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:09.098 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:09.098 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.357 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.357 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:09.357 11:07:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:09.357 11:07:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:35:09.615 11:07:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:10.552 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:10.552 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:10.552 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.552 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:10.811 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.811 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:10.811 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.811 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:11.071 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.071 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:11.071 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.071 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:11.071 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.071 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:11.330 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:35:11.330 11:07:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:11.330 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.330 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:11.330 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.330 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:11.589 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.589 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:11.589 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.589 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:11.848 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.848 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:11.848 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:12.107 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:35:12.107 11:07:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:13.487 11:07:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:13.487 11:07:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:13.487 11:07:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.487 11:07:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:13.487 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.487 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:13.487 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.487 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:13.487 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:13.487 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:13.487 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.487 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:13.747 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.747 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:13.747 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.747 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:14.006 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.006 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:14.006 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.006 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:14.265 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.265 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:14.265 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.265 11:07:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1892979 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1892979 ']' 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1892979 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # uname 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1892979 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1892979' 00:35:14.265 killing process with pid 1892979 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1892979 00:35:14.265 11:07:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1892979 00:35:14.524 { 00:35:14.524 "results": [ 00:35:14.524 { 00:35:14.524 "job": "Nvme0n1", 00:35:14.524 "core_mask": "0x4", 00:35:14.524 "workload": "verify", 00:35:14.524 "status": "terminated", 00:35:14.524 "verify_range": { 00:35:14.524 "start": 0, 00:35:14.524 "length": 16384 00:35:14.524 }, 00:35:14.524 "queue_depth": 128, 00:35:14.524 "io_size": 4096, 00:35:14.524 "runtime": 27.583633, 00:35:14.524 "iops": 14001.708911947893, 00:35:14.524 "mibps": 54.69417543729646, 00:35:14.524 "io_failed": 0, 00:35:14.524 "io_timeout": 0, 00:35:14.524 "avg_latency_us": 9119.513572028232, 00:35:14.524 "min_latency_us": 1677.7216, 00:35:14.524 "max_latency_us": 3019898.88 00:35:14.524 } 00:35:14.524 ], 00:35:14.525 "core_count": 1 00:35:14.525 } 00:35:15.467 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1892979 00:35:15.467 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:15.467 [2024-11-28 11:07:23.889107] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:35:15.467 [2024-11-28 11:07:23.889205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892979 ] 00:35:15.467 [2024-11-28 11:07:24.037945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.467 [2024-11-28 11:07:24.139751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:15.467 Running I/O for 90 seconds... 
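[Editor's sketch] The repeated per-port checks logged above all follow one pattern: query the bdevperf RPC socket with bdev_nvme_get_io_paths and filter the reply by listener port (trsvcid) with jq, while the target-side ANA state is flipped between optimized/non_optimized/inaccessible with nvmf_subsystem_listener_set_ana_state. A minimal standalone sketch of that pattern, assuming the same socket path, subsystem NQN and ports as in this run (the real helper is port_status() in test/nvmf/host/multipath_status.sh; the names below are illustrative, not the script verbatim):

#!/usr/bin/env bash
# Sketch only: assumes bdevperf is already running with its RPC socket
# at /var/tmp/bdevperf.sock, as in the log above.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# port_status PORT FIELD EXPECTED
# FIELD is one of: current, connected, accessible (per io_path in the RPC reply).
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$($rpc -s $sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# The ANA state itself is changed on the target side, e.g. (as in the log):
#   $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
#       -t rdma -a 192.168.100.8 -s 4420 -n inaccessible
# After such a change the path on port 4420 should still be connected but
# no longer accessible or current:
port_status 4420 current false
port_status 4420 connected true
port_status 4420 accessible false

The sleep 1 between each set_ANA_state call and the following check_status in the log gives the host time to process the ANA change notification before the path flags are re-read.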
00:35:15.467 16167.00 IOPS, 63.15 MiB/s [2024-11-28T10:07:54.348Z] 16256.00 IOPS, 63.50 MiB/s [2024-11-28T10:07:54.348Z] 16298.67 IOPS, 63.67 MiB/s [2024-11-28T10:07:54.348Z] 16288.00 IOPS, 63.62 MiB/s [2024-11-28T10:07:54.348Z] 16277.40 IOPS, 63.58 MiB/s [2024-11-28T10:07:54.348Z] 16298.67 IOPS, 63.67 MiB/s [2024-11-28T10:07:54.348Z] 16292.57 IOPS, 63.64 MiB/s [2024-11-28T10:07:54.348Z] 16285.88 IOPS, 63.62 MiB/s [2024-11-28T10:07:54.348Z] 16270.22 IOPS, 63.56 MiB/s [2024-11-28T10:07:54.348Z] 16256.00 IOPS, 63.50 MiB/s [2024-11-28T10:07:54.348Z] 16267.64 IOPS, 63.55 MiB/s [2024-11-28T10:07:54.348Z] 16259.25 IOPS, 63.51 MiB/s [2024-11-28T10:07:54.348Z] [2024-11-28 11:07:37.772470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431d000 len:0x1000 key:0x185200 00:35:15.467 [2024-11-28 11:07:37.772531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:15.467 [2024-11-28 11:07:37.772601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431f000 len:0x1000 key:0x185200 00:35:15.467 [2024-11-28 11:07:37.772627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.467 [2024-11-28 11:07:37.772645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004321000 len:0x1000 key:0x185200 00:35:15.467 [2024-11-28 11:07:37.772660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:15.467 [2024-11-28 11:07:37.772677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004323000 len:0x1000 key:0x185200 00:35:15.467 [2024-11-28 11:07:37.772692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:15.467 [2024-11-28 11:07:37.772708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0x185200 00:35:15.467 [2024-11-28 11:07:37.772726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:15.467 [2024-11-28 11:07:37.772743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0x185200 00:35:15.467 [2024-11-28 11:07:37.772758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:15.467 [2024-11-28 11:07:37.772774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x185200 00:35:15.467 [2024-11-28 11:07:37.772789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.772806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18608 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000432b000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.772820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.772836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432d000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.772857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.772873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.772888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.772904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.772919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.772934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.772949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.772965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.772982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.772998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x185200 
00:35:15.468 [2024-11-28 11:07:37.773102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 
11:07:37.773377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.468 [2024-11-28 11:07:37.773657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x185200 00:35:15.468 [2024-11-28 11:07:37.773671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.773686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.773701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.773717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.773734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.773750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.773764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.773780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.773794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.773811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.773825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.773841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.773856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.773871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.773886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.773902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.773918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.773934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.773948] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.773964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.773982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.773998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d 
p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x185200 00:35:15.469 [2024-11-28 11:07:37.774594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.469 [2024-11-28 11:07:37.774617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.774652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.774682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.774713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.774747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.774778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 
11:07:37.774807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.774838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.774868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.774898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.774928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.774960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.774976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.774992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.775006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.470 [2024-11-28 11:07:37.775036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.470 [2024-11-28 11:07:37.775066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.470 [2024-11-28 11:07:37.775098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.470 [2024-11-28 11:07:37.775130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.470 [2024-11-28 11:07:37.775160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.470 [2024-11-28 11:07:37.775190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.470 [2024-11-28 11:07:37.775221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.470 [2024-11-28 11:07:37.775251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.470 [2024-11-28 11:07:37.775280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.470 [2024-11-28 11:07:37.775310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.775342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.775372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:15.470 
[2024-11-28 11:07:37.775389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.775403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.775434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.775467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:15.470 [2024-11-28 11:07:37.775482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x185200 00:35:15.470 [2024-11-28 11:07:37.775498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d1000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d3000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d9000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043db000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:78 nsid:1 lba:19336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.775982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.775996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19408 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x185200 00:35:15.471 [2024-11-28 11:07:37.776363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:15.471 [2024-11-28 11:07:37.776381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fd000 len:0x1000 key:0x185200 00:35:15.472 [2024-11-28 11:07:37.776395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:37.776411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:37.776425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:37.776441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:37.776457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:37.776472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:37.776486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:37.776780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:37.776802] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:37.776828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:37.776843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:15.472 15302.08 IOPS, 59.77 MiB/s [2024-11-28T10:07:54.353Z] 14209.07 IOPS, 55.50 MiB/s [2024-11-28T10:07:54.353Z] 13261.80 IOPS, 51.80 MiB/s [2024-11-28T10:07:54.353Z] 13210.69 IOPS, 51.60 MiB/s [2024-11-28T10:07:54.353Z] 13389.12 IOPS, 52.30 MiB/s [2024-11-28T10:07:54.353Z] 13482.78 IOPS, 52.67 MiB/s [2024-11-28T10:07:54.353Z] 13493.05 IOPS, 52.71 MiB/s [2024-11-28T10:07:54.353Z] 13496.20 IOPS, 52.72 MiB/s [2024-11-28T10:07:54.353Z] 13625.95 IOPS, 53.23 MiB/s [2024-11-28T10:07:54.353Z] 13745.05 IOPS, 53.69 MiB/s [2024-11-28T10:07:54.353Z] 13836.35 IOPS, 54.05 MiB/s [2024-11-28T10:07:54.353Z] 13826.62 IOPS, 54.01 MiB/s [2024-11-28T10:07:54.353Z] 13811.60 IOPS, 53.95 MiB/s [2024-11-28T10:07:54.353Z] [2024-11-28 11:07:50.907196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.907258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.907310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x185200 00:35:15.472 [2024-11-28 11:07:50.907328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.907345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x185200 00:35:15.472 [2024-11-28 11:07:50.907360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.907860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.907884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.907905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.907927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.907943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x185200 00:35:15.472 [2024-11-28 11:07:50.907958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.907974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46456 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.907988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.908021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x185200 00:35:15.472 [2024-11-28 11:07:50.908051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x185200 00:35:15.472 [2024-11-28 11:07:50.908081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004321000 len:0x1000 key:0x185200 00:35:15.472 [2024-11-28 11:07:50.908111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.908141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.908173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.908203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.908233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.908270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:35:15.472 [2024-11-28 11:07:50.908285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x185200 00:35:15.472 [2024-11-28 11:07:50.908303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.908332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.908362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x185200 00:35:15.472 [2024-11-28 11:07:50.908394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.472 [2024-11-28 11:07:50.908431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x185200 00:35:15.472 [2024-11-28 11:07:50.908461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:15.472 [2024-11-28 11:07:50.908476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.473 [2024-11-28 11:07:50.908490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.473 [2024-11-28 11:07:50.908520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.908549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.473 
[2024-11-28 11:07:50.908579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.908613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.473 [2024-11-28 11:07:50.908647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.908681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.908710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.908740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.908772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.473 [2024-11-28 11:07:50.908801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.473 [2024-11-28 11:07:50.908830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.908860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.473 [2024-11-28 11:07:50.908931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.908964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.908979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.908993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.909009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.909023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.909039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.473 [2024-11-28 11:07:50.909054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.909071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.909086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.909101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.473 [2024-11-28 11:07:50.909116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.909131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.909145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.909160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.473 [2024-11-28 11:07:50.909174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:15.473 [2024-11-28 11:07:50.909190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:46376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x185200 00:35:15.473 [2024-11-28 11:07:50.909208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x185200 00:35:15.474 [2024-11-28 11:07:50.909237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.474 [2024-11-28 11:07:50.909267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x185200 00:35:15.474 [2024-11-28 11:07:50.909296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x185200 00:35:15.474 [2024-11-28 11:07:50.909326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.474 [2024-11-28 11:07:50.909356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.474 [2024-11-28 11:07:50.909385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x185200 00:35:15.474 [2024-11-28 11:07:50.909417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.474 [2024-11-28 11:07:50.909450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0x185200 00:35:15.474 [2024-11-28 11:07:50.909480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x185200 00:35:15.474 [2024-11-28 11:07:50.909510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431f000 len:0x1000 key:0x185200 00:35:15.474 [2024-11-28 11:07:50.909540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0x185200 00:35:15.474 [2024-11-28 11:07:50.909572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x185200 00:35:15.474 [2024-11-28 11:07:50.909601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.474 [2024-11-28 11:07:50.909638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x185200 00:35:15.474 [2024-11-28 11:07:50.909667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x185200 00:35:15.474 [2024-11-28 11:07:50.909700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.474 [2024-11-28 11:07:50.909729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:15.474 [2024-11-28 11:07:50.909744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.474 [2024-11-28 11:07:50.909759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
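The burst of NOTICE lines above is the SPDK NVMe host driver printing each outstanding READ/WRITE command together with its completion; every completion carries ASYMMETRIC ACCESS INACCESSIBLE (03/02), the path-related ANA status returned while the multipath_status test drives one listener to the inaccessible state. A hedged way to summarize such a burst offline from a saved copy of this console output (the file name console.log is illustrative, not something the test writes):

# Count completions that failed with the ANA-inaccessible status, then break the
# affected commands down by opcode; only standard grep/sort/uniq/wc are used.
grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log | wc -l
grep -Eo '(READ|WRITE) sqid:[0-9]+' console.log | sort | uniq -c | sort -rn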
00:35:15.474 [2024-11-28 11:07:50.909775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.474 [2024-11-28 11:07:50.909789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:15.474 13869.73 IOPS, 54.18 MiB/s [2024-11-28T10:07:54.355Z] 13957.63 IOPS, 54.52 MiB/s [2024-11-28T10:07:54.355Z] Received shutdown signal, test time was about 27.584305 seconds 00:35:15.474 00:35:15.474 Latency(us) 00:35:15.474 [2024-11-28T10:07:54.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.474 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:15.474 Verification LBA range: start 0x0 length 0x4000 00:35:15.474 Nvme0n1 : 27.58 14001.71 54.69 0.00 0.00 9119.51 1677.72 3019898.88 00:35:15.474 [2024-11-28T10:07:54.355Z] =================================================================================================================== 00:35:15.474 [2024-11-28T10:07:54.355Z] Total : 14001.71 54.69 0.00 0.00 9119.51 1677.72 3019898.88 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:35:15.474 rmmod nvme_rdma 00:35:15.474 rmmod nvme_fabrics 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1892666 ']' 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1892666 00:35:15.474 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1892666 ']' 00:35:15.475 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1892666 00:35:15.475 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # uname 00:35:15.475 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.475 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1892666 00:35:15.734 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:15.734 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:15.734 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1892666' 00:35:15.734 killing process with pid 1892666 00:35:15.734 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1892666 00:35:15.734 11:07:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1892666 00:35:17.115 11:07:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:17.115 11:07:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:35:17.115 00:35:17.115 real 0m42.966s 00:35:17.115 user 1m55.407s 00:35:17.115 sys 0m10.758s 00:35:17.115 11:07:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.115 11:07:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:17.115 ************************************ 00:35:17.115 END TEST nvmf_host_multipath_status 00:35:17.115 ************************************ 00:35:17.115 11:07:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:35:17.115 11:07:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:17.115 11:07:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.115 11:07:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.375 ************************************ 00:35:17.375 START TEST nvmf_discovery_remove_ifc 00:35:17.376 ************************************ 00:35:17.376 11:07:55 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:35:17.376 * Looking for test storage... 
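The multipath_status teardown above ends with the generic killprocess helper (common/autotest_common.sh@954-978): confirm the pid argument is set, probe it with kill -0, resolve the process name (reactor_0 for the nvmf target here) so a bare sudo wrapper is not signalled directly, then kill and wait. A rough, simplified sketch of that flow, not the exact upstream implementation:

# Simplified sketch of the killprocess flow traced above; the real helper in
# autotest_common.sh treats sudo wrappers and non-Linux hosts differently.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1                   # the "[ -z 1892666 ]" guard in the trace
  kill -0 "$pid" 2>/dev/null || return 0      # kill -0: is the process still alive?
  if [ "$(uname)" = Linux ]; then
    local name
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for the SPDK target
    [ "$name" = sudo ] && return 1            # never signal the sudo wrapper itself
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true
}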
00:35:17.376 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:17.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.376 --rc genhtml_branch_coverage=1 00:35:17.376 --rc genhtml_function_coverage=1 00:35:17.376 --rc genhtml_legend=1 00:35:17.376 --rc geninfo_all_blocks=1 00:35:17.376 --rc geninfo_unexecuted_blocks=1 00:35:17.376 00:35:17.376 ' 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:17.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.376 --rc genhtml_branch_coverage=1 00:35:17.376 --rc genhtml_function_coverage=1 00:35:17.376 --rc genhtml_legend=1 00:35:17.376 --rc geninfo_all_blocks=1 00:35:17.376 --rc geninfo_unexecuted_blocks=1 00:35:17.376 00:35:17.376 ' 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:17.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.376 --rc genhtml_branch_coverage=1 00:35:17.376 --rc genhtml_function_coverage=1 00:35:17.376 --rc genhtml_legend=1 00:35:17.376 --rc geninfo_all_blocks=1 00:35:17.376 --rc geninfo_unexecuted_blocks=1 00:35:17.376 00:35:17.376 ' 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:17.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.376 --rc genhtml_branch_coverage=1 00:35:17.376 --rc genhtml_function_coverage=1 00:35:17.376 --rc genhtml_legend=1 00:35:17.376 --rc geninfo_all_blocks=1 00:35:17.376 --rc geninfo_unexecuted_blocks=1 00:35:17.376 00:35:17.376 ' 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
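The scripts/common.sh trace above ("lt 1.15 2" expanding into cmp_versions 1.15 '<' 2) is a field-wise version comparison used to decide that the installed lcov 1.15 predates 2.x and therefore needs the legacy coverage options. A minimal reconstruction of that check, condensed from the trace rather than copied from the script:

# Minimal sketch of the version test traced above: split on '.', '-' and ':' and
# compare field by field; simplified, so not the exact cmp_versions from common.sh.
lt() {
  local IFS=.-: v
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # older in this field -> true
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # newer in this field -> false
  done
  return 1                                            # equal versions are not "less than"
}

lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_*_coverage=1 options'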
00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.376 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:17.377 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:35:17.377 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:35:17.377 00:35:17.377 real 0m0.217s 00:35:17.377 user 0m0.113s 00:35:17.377 sys 0m0.119s 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.377 11:07:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.377 ************************************ 00:35:17.377 END TEST nvmf_discovery_remove_ifc 00:35:17.377 ************************************ 00:35:17.637 11:07:56 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:35:17.637 11:07:56 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:17.637 11:07:56 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.637 11:07:56 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.638 ************************************ 00:35:17.638 START TEST nvmf_identify_kernel_target 00:35:17.638 ************************************ 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:35:17.638 * Looking for test storage... 00:35:17.638 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:17.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.638 --rc genhtml_branch_coverage=1 00:35:17.638 --rc genhtml_function_coverage=1 00:35:17.638 --rc genhtml_legend=1 00:35:17.638 --rc geninfo_all_blocks=1 00:35:17.638 --rc geninfo_unexecuted_blocks=1 00:35:17.638 00:35:17.638 ' 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:17.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.638 --rc genhtml_branch_coverage=1 00:35:17.638 --rc genhtml_function_coverage=1 00:35:17.638 --rc genhtml_legend=1 00:35:17.638 --rc geninfo_all_blocks=1 00:35:17.638 --rc geninfo_unexecuted_blocks=1 00:35:17.638 00:35:17.638 ' 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:17.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.638 --rc genhtml_branch_coverage=1 00:35:17.638 --rc genhtml_function_coverage=1 00:35:17.638 --rc genhtml_legend=1 00:35:17.638 --rc geninfo_all_blocks=1 00:35:17.638 --rc geninfo_unexecuted_blocks=1 00:35:17.638 00:35:17.638 ' 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:17.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.638 --rc genhtml_branch_coverage=1 00:35:17.638 --rc genhtml_function_coverage=1 00:35:17.638 --rc genhtml_legend=1 00:35:17.638 --rc geninfo_all_blocks=1 00:35:17.638 --rc geninfo_unexecuted_blocks=1 00:35:17.638 00:35:17.638 ' 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.638 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:17.639 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.639 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:17.639 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.639 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.639 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.639 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.639 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.639 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:17.639 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.639 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.639 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:17.898 11:07:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:35:26.020 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:35:26.020 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:35:26.020 Found net devices under 0000:d9:00.0: mlx_0_0 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:35:26.020 Found net devices under 0000:d9:00.1: mlx_0_1 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.020 11:08:04 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:35:26.020 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:26.021 
11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:35:26.021 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:26.021 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:35:26.021 altname enp217s0f0np0 00:35:26.021 altname ens818f0np0 00:35:26.021 inet 192.168.100.8/24 scope global mlx_0_0 00:35:26.021 valid_lft forever preferred_lft forever 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:35:26.021 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:26.021 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:35:26.021 altname enp217s0f1np1 00:35:26.021 altname ens818f1np1 00:35:26.021 inet 192.168.100.9/24 scope global mlx_0_1 00:35:26.021 valid_lft forever preferred_lft forever 00:35:26.021 11:08:04 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:26.021 
11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:35:26.021 192.168.100.9' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:35:26.021 192.168.100.9' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:35:26.021 192.168.100.9' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:26.021 11:08:04 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:26.021 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:26.282 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:26.282 11:08:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:35:29.575 Waiting for block devices as requested 00:35:29.575 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:29.575 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:29.575 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:29.575 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:29.835 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:29.835 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:29.835 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:30.094 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:30.095 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:30.095 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:30.354 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:30.354 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:30.355 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:30.614 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:30.614 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:30.615 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:30.874 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:30.874 11:08:09 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:30.874 No valid GPT data, bailing 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:30.874 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:35:31.144 00:35:31.144 Discovery Log Number of Records 2, Generation counter 2 00:35:31.144 =====Discovery Log Entry 0====== 00:35:31.144 trtype: rdma 00:35:31.144 adrfam: ipv4 00:35:31.144 subtype: current discovery subsystem 00:35:31.144 treq: not specified, sq 
flow control disable supported 00:35:31.144 portid: 1 00:35:31.144 trsvcid: 4420 00:35:31.144 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:31.144 traddr: 192.168.100.8 00:35:31.144 eflags: none 00:35:31.144 rdma_prtype: not specified 00:35:31.144 rdma_qptype: connected 00:35:31.144 rdma_cms: rdma-cm 00:35:31.144 rdma_pkey: 0x0000 00:35:31.144 =====Discovery Log Entry 1====== 00:35:31.144 trtype: rdma 00:35:31.144 adrfam: ipv4 00:35:31.144 subtype: nvme subsystem 00:35:31.144 treq: not specified, sq flow control disable supported 00:35:31.144 portid: 1 00:35:31.144 trsvcid: 4420 00:35:31.144 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:31.144 traddr: 192.168.100.8 00:35:31.144 eflags: none 00:35:31.144 rdma_prtype: not specified 00:35:31.144 rdma_qptype: connected 00:35:31.144 rdma_cms: rdma-cm 00:35:31.144 rdma_pkey: 0x0000 00:35:31.144 11:08:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:35:31.144 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:31.407 ===================================================== 00:35:31.407 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:31.407 ===================================================== 00:35:31.407 Controller Capabilities/Features 00:35:31.407 ================================ 00:35:31.407 Vendor ID: 0000 00:35:31.407 Subsystem Vendor ID: 0000 00:35:31.407 Serial Number: 2d225256114b106a090e 00:35:31.407 Model Number: Linux 00:35:31.407 Firmware Version: 6.8.9-20 00:35:31.407 Recommended Arb Burst: 0 00:35:31.407 IEEE OUI Identifier: 00 00 00 00:35:31.407 Multi-path I/O 00:35:31.407 May have multiple subsystem ports: No 00:35:31.407 May have multiple controllers: No 00:35:31.408 Associated with SR-IOV VF: No 00:35:31.408 Max Data Transfer Size: Unlimited 00:35:31.408 Max Number of Namespaces: 0 00:35:31.408 Max Number of I/O Queues: 1024 00:35:31.408 NVMe Specification Version (VS): 1.3 00:35:31.408 NVMe Specification Version (Identify): 1.3 00:35:31.408 Maximum Queue Entries: 128 00:35:31.408 Contiguous Queues Required: No 00:35:31.408 Arbitration Mechanisms Supported 00:35:31.408 Weighted Round Robin: Not Supported 00:35:31.408 Vendor Specific: Not Supported 00:35:31.408 Reset Timeout: 7500 ms 00:35:31.408 Doorbell Stride: 4 bytes 00:35:31.408 NVM Subsystem Reset: Not Supported 00:35:31.408 Command Sets Supported 00:35:31.408 NVM Command Set: Supported 00:35:31.408 Boot Partition: Not Supported 00:35:31.408 Memory Page Size Minimum: 4096 bytes 00:35:31.408 Memory Page Size Maximum: 4096 bytes 00:35:31.408 Persistent Memory Region: Not Supported 00:35:31.408 Optional Asynchronous Events Supported 00:35:31.408 Namespace Attribute Notices: Not Supported 00:35:31.408 Firmware Activation Notices: Not Supported 00:35:31.408 ANA Change Notices: Not Supported 00:35:31.408 PLE Aggregate Log Change Notices: Not Supported 00:35:31.408 LBA Status Info Alert Notices: Not Supported 00:35:31.408 EGE Aggregate Log Change Notices: Not Supported 00:35:31.408 Normal NVM Subsystem Shutdown event: Not Supported 00:35:31.408 Zone Descriptor Change Notices: Not Supported 00:35:31.408 Discovery Log Change Notices: Supported 00:35:31.408 Controller Attributes 00:35:31.408 128-bit Host Identifier: Not Supported 00:35:31.408 Non-Operational Permissive Mode: Not Supported 00:35:31.408 NVM Sets: Not Supported 00:35:31.408 Read Recovery Levels: 
Not Supported 00:35:31.408 Endurance Groups: Not Supported 00:35:31.408 Predictable Latency Mode: Not Supported 00:35:31.408 Traffic Based Keep ALive: Not Supported 00:35:31.408 Namespace Granularity: Not Supported 00:35:31.408 SQ Associations: Not Supported 00:35:31.408 UUID List: Not Supported 00:35:31.408 Multi-Domain Subsystem: Not Supported 00:35:31.408 Fixed Capacity Management: Not Supported 00:35:31.408 Variable Capacity Management: Not Supported 00:35:31.408 Delete Endurance Group: Not Supported 00:35:31.408 Delete NVM Set: Not Supported 00:35:31.408 Extended LBA Formats Supported: Not Supported 00:35:31.408 Flexible Data Placement Supported: Not Supported 00:35:31.408 00:35:31.408 Controller Memory Buffer Support 00:35:31.408 ================================ 00:35:31.408 Supported: No 00:35:31.408 00:35:31.408 Persistent Memory Region Support 00:35:31.408 ================================ 00:35:31.408 Supported: No 00:35:31.408 00:35:31.408 Admin Command Set Attributes 00:35:31.408 ============================ 00:35:31.408 Security Send/Receive: Not Supported 00:35:31.408 Format NVM: Not Supported 00:35:31.408 Firmware Activate/Download: Not Supported 00:35:31.408 Namespace Management: Not Supported 00:35:31.408 Device Self-Test: Not Supported 00:35:31.408 Directives: Not Supported 00:35:31.408 NVMe-MI: Not Supported 00:35:31.408 Virtualization Management: Not Supported 00:35:31.408 Doorbell Buffer Config: Not Supported 00:35:31.408 Get LBA Status Capability: Not Supported 00:35:31.408 Command & Feature Lockdown Capability: Not Supported 00:35:31.408 Abort Command Limit: 1 00:35:31.408 Async Event Request Limit: 1 00:35:31.408 Number of Firmware Slots: N/A 00:35:31.408 Firmware Slot 1 Read-Only: N/A 00:35:31.408 Firmware Activation Without Reset: N/A 00:35:31.408 Multiple Update Detection Support: N/A 00:35:31.408 Firmware Update Granularity: No Information Provided 00:35:31.408 Per-Namespace SMART Log: No 00:35:31.408 Asymmetric Namespace Access Log Page: Not Supported 00:35:31.408 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:31.408 Command Effects Log Page: Not Supported 00:35:31.408 Get Log Page Extended Data: Supported 00:35:31.408 Telemetry Log Pages: Not Supported 00:35:31.408 Persistent Event Log Pages: Not Supported 00:35:31.408 Supported Log Pages Log Page: May Support 00:35:31.408 Commands Supported & Effects Log Page: Not Supported 00:35:31.408 Feature Identifiers & Effects Log Page:May Support 00:35:31.408 NVMe-MI Commands & Effects Log Page: May Support 00:35:31.408 Data Area 4 for Telemetry Log: Not Supported 00:35:31.408 Error Log Page Entries Supported: 1 00:35:31.408 Keep Alive: Not Supported 00:35:31.408 00:35:31.408 NVM Command Set Attributes 00:35:31.408 ========================== 00:35:31.408 Submission Queue Entry Size 00:35:31.408 Max: 1 00:35:31.408 Min: 1 00:35:31.408 Completion Queue Entry Size 00:35:31.408 Max: 1 00:35:31.408 Min: 1 00:35:31.408 Number of Namespaces: 0 00:35:31.408 Compare Command: Not Supported 00:35:31.408 Write Uncorrectable Command: Not Supported 00:35:31.408 Dataset Management Command: Not Supported 00:35:31.408 Write Zeroes Command: Not Supported 00:35:31.408 Set Features Save Field: Not Supported 00:35:31.408 Reservations: Not Supported 00:35:31.408 Timestamp: Not Supported 00:35:31.408 Copy: Not Supported 00:35:31.408 Volatile Write Cache: Not Present 00:35:31.408 Atomic Write Unit (Normal): 1 00:35:31.408 Atomic Write Unit (PFail): 1 00:35:31.408 Atomic Compare & Write Unit: 1 00:35:31.408 Fused Compare & Write: Not 
Supported 00:35:31.408 Scatter-Gather List 00:35:31.408 SGL Command Set: Supported 00:35:31.408 SGL Keyed: Supported 00:35:31.408 SGL Bit Bucket Descriptor: Not Supported 00:35:31.408 SGL Metadata Pointer: Not Supported 00:35:31.408 Oversized SGL: Not Supported 00:35:31.408 SGL Metadata Address: Not Supported 00:35:31.408 SGL Offset: Supported 00:35:31.408 Transport SGL Data Block: Not Supported 00:35:31.408 Replay Protected Memory Block: Not Supported 00:35:31.408 00:35:31.408 Firmware Slot Information 00:35:31.408 ========================= 00:35:31.408 Active slot: 0 00:35:31.408 00:35:31.408 00:35:31.408 Error Log 00:35:31.408 ========= 00:35:31.408 00:35:31.408 Active Namespaces 00:35:31.408 ================= 00:35:31.408 Discovery Log Page 00:35:31.408 ================== 00:35:31.408 Generation Counter: 2 00:35:31.408 Number of Records: 2 00:35:31.408 Record Format: 0 00:35:31.408 00:35:31.408 Discovery Log Entry 0 00:35:31.408 ---------------------- 00:35:31.408 Transport Type: 1 (RDMA) 00:35:31.408 Address Family: 1 (IPv4) 00:35:31.408 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:31.408 Entry Flags: 00:35:31.408 Duplicate Returned Information: 0 00:35:31.408 Explicit Persistent Connection Support for Discovery: 0 00:35:31.408 Transport Requirements: 00:35:31.408 Secure Channel: Not Specified 00:35:31.408 Port ID: 1 (0x0001) 00:35:31.408 Controller ID: 65535 (0xffff) 00:35:31.408 Admin Max SQ Size: 32 00:35:31.408 Transport Service Identifier: 4420 00:35:31.408 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:31.408 Transport Address: 192.168.100.8 00:35:31.408 Transport Specific Address Subtype - RDMA 00:35:31.408 RDMA QP Service Type: 1 (Reliable Connected) 00:35:31.408 RDMA Provider Type: 1 (No provider specified) 00:35:31.408 RDMA CM Service: 1 (RDMA_CM) 00:35:31.408 Discovery Log Entry 1 00:35:31.408 ---------------------- 00:35:31.408 Transport Type: 1 (RDMA) 00:35:31.408 Address Family: 1 (IPv4) 00:35:31.408 Subsystem Type: 2 (NVM Subsystem) 00:35:31.408 Entry Flags: 00:35:31.408 Duplicate Returned Information: 0 00:35:31.408 Explicit Persistent Connection Support for Discovery: 0 00:35:31.408 Transport Requirements: 00:35:31.408 Secure Channel: Not Specified 00:35:31.408 Port ID: 1 (0x0001) 00:35:31.408 Controller ID: 65535 (0xffff) 00:35:31.408 Admin Max SQ Size: 32 00:35:31.408 Transport Service Identifier: 4420 00:35:31.408 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:31.408 Transport Address: 192.168.100.8 00:35:31.408 Transport Specific Address Subtype - RDMA 00:35:31.408 RDMA QP Service Type: 1 (Reliable Connected) 00:35:31.408 RDMA Provider Type: 1 (No provider specified) 00:35:31.408 RDMA CM Service: 1 (RDMA_CM) 00:35:31.408 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:31.669 get_feature(0x01) failed 00:35:31.669 get_feature(0x02) failed 00:35:31.669 get_feature(0x04) failed 00:35:31.669 ===================================================== 00:35:31.669 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:35:31.669 ===================================================== 00:35:31.669 Controller Capabilities/Features 00:35:31.669 ================================ 00:35:31.669 Vendor ID: 0000 00:35:31.669 Subsystem Vendor ID: 0000 00:35:31.669 Serial Number: 
fddb6b85a2180eaf45ad 00:35:31.669 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:31.669 Firmware Version: 6.8.9-20 00:35:31.669 Recommended Arb Burst: 6 00:35:31.669 IEEE OUI Identifier: 00 00 00 00:35:31.669 Multi-path I/O 00:35:31.669 May have multiple subsystem ports: Yes 00:35:31.669 May have multiple controllers: Yes 00:35:31.669 Associated with SR-IOV VF: No 00:35:31.669 Max Data Transfer Size: 1048576 00:35:31.669 Max Number of Namespaces: 1024 00:35:31.669 Max Number of I/O Queues: 128 00:35:31.669 NVMe Specification Version (VS): 1.3 00:35:31.669 NVMe Specification Version (Identify): 1.3 00:35:31.669 Maximum Queue Entries: 128 00:35:31.669 Contiguous Queues Required: No 00:35:31.669 Arbitration Mechanisms Supported 00:35:31.669 Weighted Round Robin: Not Supported 00:35:31.669 Vendor Specific: Not Supported 00:35:31.669 Reset Timeout: 7500 ms 00:35:31.669 Doorbell Stride: 4 bytes 00:35:31.669 NVM Subsystem Reset: Not Supported 00:35:31.669 Command Sets Supported 00:35:31.669 NVM Command Set: Supported 00:35:31.669 Boot Partition: Not Supported 00:35:31.669 Memory Page Size Minimum: 4096 bytes 00:35:31.669 Memory Page Size Maximum: 4096 bytes 00:35:31.669 Persistent Memory Region: Not Supported 00:35:31.669 Optional Asynchronous Events Supported 00:35:31.669 Namespace Attribute Notices: Supported 00:35:31.669 Firmware Activation Notices: Not Supported 00:35:31.669 ANA Change Notices: Supported 00:35:31.669 PLE Aggregate Log Change Notices: Not Supported 00:35:31.669 LBA Status Info Alert Notices: Not Supported 00:35:31.669 EGE Aggregate Log Change Notices: Not Supported 00:35:31.669 Normal NVM Subsystem Shutdown event: Not Supported 00:35:31.669 Zone Descriptor Change Notices: Not Supported 00:35:31.669 Discovery Log Change Notices: Not Supported 00:35:31.669 Controller Attributes 00:35:31.669 128-bit Host Identifier: Supported 00:35:31.669 Non-Operational Permissive Mode: Not Supported 00:35:31.669 NVM Sets: Not Supported 00:35:31.669 Read Recovery Levels: Not Supported 00:35:31.669 Endurance Groups: Not Supported 00:35:31.669 Predictable Latency Mode: Not Supported 00:35:31.669 Traffic Based Keep ALive: Supported 00:35:31.669 Namespace Granularity: Not Supported 00:35:31.669 SQ Associations: Not Supported 00:35:31.669 UUID List: Not Supported 00:35:31.669 Multi-Domain Subsystem: Not Supported 00:35:31.669 Fixed Capacity Management: Not Supported 00:35:31.669 Variable Capacity Management: Not Supported 00:35:31.669 Delete Endurance Group: Not Supported 00:35:31.669 Delete NVM Set: Not Supported 00:35:31.669 Extended LBA Formats Supported: Not Supported 00:35:31.669 Flexible Data Placement Supported: Not Supported 00:35:31.669 00:35:31.669 Controller Memory Buffer Support 00:35:31.669 ================================ 00:35:31.669 Supported: No 00:35:31.669 00:35:31.669 Persistent Memory Region Support 00:35:31.669 ================================ 00:35:31.669 Supported: No 00:35:31.669 00:35:31.669 Admin Command Set Attributes 00:35:31.669 ============================ 00:35:31.669 Security Send/Receive: Not Supported 00:35:31.669 Format NVM: Not Supported 00:35:31.669 Firmware Activate/Download: Not Supported 00:35:31.669 Namespace Management: Not Supported 00:35:31.669 Device Self-Test: Not Supported 00:35:31.669 Directives: Not Supported 00:35:31.669 NVMe-MI: Not Supported 00:35:31.669 Virtualization Management: Not Supported 00:35:31.669 Doorbell Buffer Config: Not Supported 00:35:31.669 Get LBA Status Capability: Not Supported 00:35:31.669 Command & Feature Lockdown 
Capability: Not Supported 00:35:31.669 Abort Command Limit: 4 00:35:31.669 Async Event Request Limit: 4 00:35:31.669 Number of Firmware Slots: N/A 00:35:31.669 Firmware Slot 1 Read-Only: N/A 00:35:31.669 Firmware Activation Without Reset: N/A 00:35:31.669 Multiple Update Detection Support: N/A 00:35:31.669 Firmware Update Granularity: No Information Provided 00:35:31.669 Per-Namespace SMART Log: Yes 00:35:31.669 Asymmetric Namespace Access Log Page: Supported 00:35:31.669 ANA Transition Time : 10 sec 00:35:31.669 00:35:31.669 Asymmetric Namespace Access Capabilities 00:35:31.669 ANA Optimized State : Supported 00:35:31.669 ANA Non-Optimized State : Supported 00:35:31.669 ANA Inaccessible State : Supported 00:35:31.669 ANA Persistent Loss State : Supported 00:35:31.669 ANA Change State : Supported 00:35:31.669 ANAGRPID is not changed : No 00:35:31.669 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:31.669 00:35:31.669 ANA Group Identifier Maximum : 128 00:35:31.669 Number of ANA Group Identifiers : 128 00:35:31.669 Max Number of Allowed Namespaces : 1024 00:35:31.669 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:31.669 Command Effects Log Page: Supported 00:35:31.669 Get Log Page Extended Data: Supported 00:35:31.669 Telemetry Log Pages: Not Supported 00:35:31.669 Persistent Event Log Pages: Not Supported 00:35:31.669 Supported Log Pages Log Page: May Support 00:35:31.669 Commands Supported & Effects Log Page: Not Supported 00:35:31.669 Feature Identifiers & Effects Log Page:May Support 00:35:31.669 NVMe-MI Commands & Effects Log Page: May Support 00:35:31.669 Data Area 4 for Telemetry Log: Not Supported 00:35:31.669 Error Log Page Entries Supported: 128 00:35:31.669 Keep Alive: Supported 00:35:31.669 Keep Alive Granularity: 1000 ms 00:35:31.669 00:35:31.669 NVM Command Set Attributes 00:35:31.669 ========================== 00:35:31.669 Submission Queue Entry Size 00:35:31.669 Max: 64 00:35:31.669 Min: 64 00:35:31.669 Completion Queue Entry Size 00:35:31.669 Max: 16 00:35:31.669 Min: 16 00:35:31.669 Number of Namespaces: 1024 00:35:31.669 Compare Command: Not Supported 00:35:31.669 Write Uncorrectable Command: Not Supported 00:35:31.669 Dataset Management Command: Supported 00:35:31.669 Write Zeroes Command: Supported 00:35:31.669 Set Features Save Field: Not Supported 00:35:31.669 Reservations: Not Supported 00:35:31.669 Timestamp: Not Supported 00:35:31.669 Copy: Not Supported 00:35:31.670 Volatile Write Cache: Present 00:35:31.670 Atomic Write Unit (Normal): 1 00:35:31.670 Atomic Write Unit (PFail): 1 00:35:31.670 Atomic Compare & Write Unit: 1 00:35:31.670 Fused Compare & Write: Not Supported 00:35:31.670 Scatter-Gather List 00:35:31.670 SGL Command Set: Supported 00:35:31.670 SGL Keyed: Supported 00:35:31.670 SGL Bit Bucket Descriptor: Not Supported 00:35:31.670 SGL Metadata Pointer: Not Supported 00:35:31.670 Oversized SGL: Not Supported 00:35:31.670 SGL Metadata Address: Not Supported 00:35:31.670 SGL Offset: Supported 00:35:31.670 Transport SGL Data Block: Not Supported 00:35:31.670 Replay Protected Memory Block: Not Supported 00:35:31.670 00:35:31.670 Firmware Slot Information 00:35:31.670 ========================= 00:35:31.670 Active slot: 0 00:35:31.670 00:35:31.670 Asymmetric Namespace Access 00:35:31.670 =========================== 00:35:31.670 Change Count : 0 00:35:31.670 Number of ANA Group Descriptors : 1 00:35:31.670 ANA Group Descriptor : 0 00:35:31.670 ANA Group ID : 1 00:35:31.670 Number of NSID Values : 1 00:35:31.670 Change Count : 0 00:35:31.670 ANA State 
: 1 00:35:31.670 Namespace Identifier : 1 00:35:31.670 00:35:31.670 Commands Supported and Effects 00:35:31.670 ============================== 00:35:31.670 Admin Commands 00:35:31.670 -------------- 00:35:31.670 Get Log Page (02h): Supported 00:35:31.670 Identify (06h): Supported 00:35:31.670 Abort (08h): Supported 00:35:31.670 Set Features (09h): Supported 00:35:31.670 Get Features (0Ah): Supported 00:35:31.670 Asynchronous Event Request (0Ch): Supported 00:35:31.670 Keep Alive (18h): Supported 00:35:31.670 I/O Commands 00:35:31.670 ------------ 00:35:31.670 Flush (00h): Supported 00:35:31.670 Write (01h): Supported LBA-Change 00:35:31.670 Read (02h): Supported 00:35:31.670 Write Zeroes (08h): Supported LBA-Change 00:35:31.670 Dataset Management (09h): Supported 00:35:31.670 00:35:31.670 Error Log 00:35:31.670 ========= 00:35:31.670 Entry: 0 00:35:31.670 Error Count: 0x3 00:35:31.670 Submission Queue Id: 0x0 00:35:31.670 Command Id: 0x5 00:35:31.670 Phase Bit: 0 00:35:31.670 Status Code: 0x2 00:35:31.670 Status Code Type: 0x0 00:35:31.670 Do Not Retry: 1 00:35:31.670 Error Location: 0x28 00:35:31.670 LBA: 0x0 00:35:31.670 Namespace: 0x0 00:35:31.670 Vendor Log Page: 0x0 00:35:31.670 ----------- 00:35:31.670 Entry: 1 00:35:31.670 Error Count: 0x2 00:35:31.670 Submission Queue Id: 0x0 00:35:31.670 Command Id: 0x5 00:35:31.670 Phase Bit: 0 00:35:31.670 Status Code: 0x2 00:35:31.670 Status Code Type: 0x0 00:35:31.670 Do Not Retry: 1 00:35:31.670 Error Location: 0x28 00:35:31.670 LBA: 0x0 00:35:31.670 Namespace: 0x0 00:35:31.670 Vendor Log Page: 0x0 00:35:31.670 ----------- 00:35:31.670 Entry: 2 00:35:31.670 Error Count: 0x1 00:35:31.670 Submission Queue Id: 0x0 00:35:31.670 Command Id: 0x0 00:35:31.670 Phase Bit: 0 00:35:31.670 Status Code: 0x2 00:35:31.670 Status Code Type: 0x0 00:35:31.670 Do Not Retry: 1 00:35:31.670 Error Location: 0x28 00:35:31.670 LBA: 0x0 00:35:31.670 Namespace: 0x0 00:35:31.670 Vendor Log Page: 0x0 00:35:31.670 00:35:31.670 Number of Queues 00:35:31.670 ================ 00:35:31.670 Number of I/O Submission Queues: 128 00:35:31.670 Number of I/O Completion Queues: 128 00:35:31.670 00:35:31.670 ZNS Specific Controller Data 00:35:31.670 ============================ 00:35:31.670 Zone Append Size Limit: 0 00:35:31.670 00:35:31.670 00:35:31.670 Active Namespaces 00:35:31.670 ================= 00:35:31.670 get_feature(0x05) failed 00:35:31.670 Namespace ID:1 00:35:31.670 Command Set Identifier: NVM (00h) 00:35:31.670 Deallocate: Supported 00:35:31.670 Deallocated/Unwritten Error: Not Supported 00:35:31.670 Deallocated Read Value: Unknown 00:35:31.670 Deallocate in Write Zeroes: Not Supported 00:35:31.670 Deallocated Guard Field: 0xFFFF 00:35:31.670 Flush: Supported 00:35:31.670 Reservation: Not Supported 00:35:31.670 Namespace Sharing Capabilities: Multiple Controllers 00:35:31.670 Size (in LBAs): 3907029168 (1863GiB) 00:35:31.670 Capacity (in LBAs): 3907029168 (1863GiB) 00:35:31.670 Utilization (in LBAs): 3907029168 (1863GiB) 00:35:31.670 UUID: c85ce53f-2a26-4785-b5ce-28054635ed86 00:35:31.670 Thin Provisioning: Not Supported 00:35:31.670 Per-NS Atomic Units: Yes 00:35:31.670 Atomic Boundary Size (Normal): 0 00:35:31.670 Atomic Boundary Size (PFail): 0 00:35:31.670 Atomic Boundary Offset: 0 00:35:31.670 NGUID/EUI64 Never Reused: No 00:35:31.670 ANA group ID: 1 00:35:31.670 Namespace Write Protected: No 00:35:31.670 Number of LBA Formats: 1 00:35:31.670 Current LBA Format: LBA Format #00 00:35:31.670 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:31.670 00:35:31.670 
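The dump above is the identify data the test reads back from the kernel nvmet target nqn.2016-06.io.spdk:testnqn over RDMA: one ANA group, a single namespace of 3907029168 512-byte LBAs, and three stale error-log entries. The harness drives its own identify tool to produce this output, so the following is only a hedged manual equivalent; the device node names and the 4420 service ID are assumptions, and the traddr is taken from the surrounding trace.

  # assumes nvme-cli and the nvme-rdma module are available; run while the target is still exported
  sudo modprobe nvme-rdma
  sudo nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:testnqn
  sudo nvme id-ctrl /dev/nvme0      # controller data: ANA capabilities, log pages, queue entry sizes
  sudo nvme id-ns /dev/nvme0n1      # namespace data: capacity, LBA format, NGUID/UUID
  sudo nvme error-log /dev/nvme0    # the error entries listed above
  sudo nvme disconnect -n nqn.2016-06.io.spdk:testnqn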
11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:35:31.670 rmmod nvme_rdma 00:35:31.670 rmmod nvme_fabrics 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:35:31.670 11:08:10 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:35:35.864 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:35.864 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:37.777 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:35:37.777 00:35:37.777 real 0m20.083s 00:35:37.777 user 0m5.074s 00:35:37.777 sys 0m12.081s 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:37.777 ************************************ 00:35:37.777 END TEST nvmf_identify_kernel_target 00:35:37.777 ************************************ 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.777 ************************************ 00:35:37.777 START TEST nvmf_auth_host 00:35:37.777 ************************************ 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:35:37.777 * Looking for test storage... 
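Before nvmf_auth_host gets under way, the teardown traced a few entries above (clean_kernel_target, nvmf/common.sh@712-723) is worth restating in plain form: it unwinds the configfs tree the kernel nvmet target was built from, then unloads the target modules. This is only a condensed sketch of what the xtrace shows; the redirect target of the bare `echo 0` is not visible in the trace, so the namespaces/1/enable path below is an assumption.

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  if [[ -e $subsys ]]; then
      echo 0 > "$subsys/namespaces/1/enable"                                           # disable the namespace (assumed path)
      rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn    # unlink subsystem from the port
      rmdir "$subsys/namespaces/1"                                                     # remove the namespace
      rmdir /sys/kernel/config/nvmet/ports/1                                           # remove the RDMA port
      rmdir "$subsys"                                                                  # remove the subsystem
      modprobe -r nvmet_rdma nvmet                                                     # unload the kernel target modules
  fi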
00:35:37.777 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:37.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.777 --rc genhtml_branch_coverage=1 00:35:37.777 --rc genhtml_function_coverage=1 00:35:37.777 --rc genhtml_legend=1 00:35:37.777 --rc geninfo_all_blocks=1 00:35:37.777 --rc geninfo_unexecuted_blocks=1 00:35:37.777 00:35:37.777 ' 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:37.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.777 --rc genhtml_branch_coverage=1 00:35:37.777 --rc genhtml_function_coverage=1 00:35:37.777 --rc genhtml_legend=1 00:35:37.777 --rc geninfo_all_blocks=1 00:35:37.777 --rc geninfo_unexecuted_blocks=1 00:35:37.777 00:35:37.777 ' 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:37.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.777 --rc genhtml_branch_coverage=1 00:35:37.777 --rc genhtml_function_coverage=1 00:35:37.777 --rc genhtml_legend=1 00:35:37.777 --rc geninfo_all_blocks=1 00:35:37.777 --rc geninfo_unexecuted_blocks=1 00:35:37.777 00:35:37.777 ' 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:37.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.777 --rc genhtml_branch_coverage=1 00:35:37.777 --rc genhtml_function_coverage=1 00:35:37.777 --rc genhtml_legend=1 00:35:37.777 --rc geninfo_all_blocks=1 00:35:37.777 --rc geninfo_unexecuted_blocks=1 00:35:37.777 00:35:37.777 ' 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.777 11:08:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:37.777 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:37.778 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:37.778 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.778 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.778 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.778 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.037 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:38.037 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:38.037 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.037 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.037 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.037 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.037 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:38.037 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.037 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:38.038 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:38.038 11:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:46.163 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:35:46.164 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:35:46.164 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:35:46.164 11:08:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:35:46.164 Found net devices under 0000:d9:00.0: mlx_0_0 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:35:46.164 Found net devices under 0000:d9:00.1: mlx_0_1 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:35:46.164 11:08:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
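allocate_nic_ips is walking the RDMA-capable netdevs here (mlx_0_0, mlx_0_1) and pulling each one's IPv4 address with the ip/awk/cut pipeline just traced. A minimal standalone restatement follows; rdma_if_ipv4 is a hypothetical name so it is not confused with the harness' own get_ip_address helper in nvmf/common.sh.

  # print the first IPv4 address configured on an interface
  rdma_if_ipv4() {
      local ifc=$1
      # "ip -o -4 addr show <ifc>" emits one line per address; field 4 is "addr/prefix"
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  }
  rdma_if_ipv4 mlx_0_0   # 192.168.100.8 on this rig
  rdma_if_ipv4 mlx_0_1   # 192.168.100.9 on this rig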
00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:35:46.164 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:46.164 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:35:46.164 altname enp217s0f0np0 00:35:46.164 altname ens818f0np0 00:35:46.164 inet 192.168.100.8/24 scope global mlx_0_0 00:35:46.164 valid_lft forever preferred_lft forever 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:46.164 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:35:46.165 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:46.165 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:35:46.165 altname enp217s0f1np1 00:35:46.165 altname ens818f1np1 00:35:46.165 inet 192.168.100.9/24 scope global mlx_0_1 00:35:46.165 valid_lft forever preferred_lft forever 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:35:46.165 192.168.100.9' 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:35:46.165 192.168.100.9' 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:35:46.165 192.168.100.9' 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:35:46.165 11:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:35:46.165 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:46.165 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:35:46.165 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:46.165 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:35:46.165 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 
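With both interface addresses collected, nvmftestinit folds them into the variables the rest of auth.sh consumes: the first and second target IPs and the RDMA transport options. A condensed restatement of the trace above, using the same variable names the script itself exports:

  # RDMA_IP_LIST holds one address per line, in interface order
  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
  # later steps hand these to the target and to "nvme connect", e.g. "-t rdma -a $NVMF_FIRST_TARGET_IP"
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'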
00:35:46.165 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:35:46.165 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:46.165 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:46.165 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:46.165 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.439 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1910166 00:35:46.439 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:46.439 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1910166 00:35:46.439 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1910166 ']' 00:35:46.439 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.439 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.439 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.439 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.439 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2ea2fd3656fe88cf1a6e2b26b788afcc 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-null.XXX 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.qyX 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2ea2fd3656fe88cf1a6e2b26b788afcc 0 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2ea2fd3656fe88cf1a6e2b26b788afcc 0 00:35:47.379 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.380 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.380 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2ea2fd3656fe88cf1a6e2b26b788afcc 00:35:47.380 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:47.380 11:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.qyX 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.qyX 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.qyX 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=17a23c6d891b6d6b10345720f83b129eaaf15802d658b59d92c44f2b515c9168 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.h65 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 17a23c6d891b6d6b10345720f83b129eaaf15802d658b59d92c44f2b515c9168 3 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 17a23c6d891b6d6b10345720f83b129eaaf15802d658b59d92c44f2b515c9168 3 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=17a23c6d891b6d6b10345720f83b129eaaf15802d658b59d92c44f2b515c9168 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.h65 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.h65 00:35:47.380 11:08:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.h65 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=074b21c790c91311b5dc19363b9a1ece142356215a77a026 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Fvs 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 074b21c790c91311b5dc19363b9a1ece142356215a77a026 0 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 074b21c790c91311b5dc19363b9a1ece142356215a77a026 0 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=074b21c790c91311b5dc19363b9a1ece142356215a77a026 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Fvs 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Fvs 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Fvs 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=76ee1e428204ad083b36feecaa7263ae4a2836555cfb2ae4 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.dQW 00:35:47.380 
11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 76ee1e428204ad083b36feecaa7263ae4a2836555cfb2ae4 2 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 76ee1e428204ad083b36feecaa7263ae4a2836555cfb2ae4 2 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=76ee1e428204ad083b36feecaa7263ae4a2836555cfb2ae4 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.dQW 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.dQW 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.dQW 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=679e5526b09f0caa553a7ea43064c222 00:35:47.380 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:47.639 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.udA 00:35:47.639 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 679e5526b09f0caa553a7ea43064c222 1 00:35:47.639 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 679e5526b09f0caa553a7ea43064c222 1 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=679e5526b09f0caa553a7ea43064c222 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.udA 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.udA 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.udA 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:47.640 11:08:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7962e6ed3deddd027fe7228a22501ff0 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Po4 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7962e6ed3deddd027fe7228a22501ff0 1 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7962e6ed3deddd027fe7228a22501ff0 1 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7962e6ed3deddd027fe7228a22501ff0 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Po4 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Po4 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Po4 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9da8d8e730553525f92c90e590780564144ddc6eabc8b50f 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.335 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9da8d8e730553525f92c90e590780564144ddc6eabc8b50f 2 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
9da8d8e730553525f92c90e590780564144ddc6eabc8b50f 2 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9da8d8e730553525f92c90e590780564144ddc6eabc8b50f 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.335 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.335 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.335 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=191c387c7fa7478fa1bd12bf9eb0d7e1 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Anc 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 191c387c7fa7478fa1bd12bf9eb0d7e1 0 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 191c387c7fa7478fa1bd12bf9eb0d7e1 0 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=191c387c7fa7478fa1bd12bf9eb0d7e1 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Anc 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Anc 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Anc 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.640 
11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:47.640 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:47.899 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9546cb99d056b30ae447466917852e2b5cc740fa7757766a042fbb5e2dec503c 00:35:47.899 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:47.899 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.n5K 00:35:47.899 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9546cb99d056b30ae447466917852e2b5cc740fa7757766a042fbb5e2dec503c 3 00:35:47.899 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9546cb99d056b30ae447466917852e2b5cc740fa7757766a042fbb5e2dec503c 3 00:35:47.899 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.899 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9546cb99d056b30ae447466917852e2b5cc740fa7757766a042fbb5e2dec503c 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.n5K 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.n5K 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.n5K 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1910166 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1910166 ']' 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
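The gen_dhchap_key/format_dhchap_key calls traced above all follow one pattern: read len/2 random bytes as a hex string from /dev/urandom, wrap that string in a DHHC-1:<digest-id>: envelope, and store the result in a mode-0600 temp file whose path becomes keys[i]/ckeys[i]. A minimal bash sketch of that pattern follows; the inline "python -" step is approximated here as a plain base64 encoding of the hex secret, while the real helper also folds in a short checksum before encoding, which accounts for the few extra trailing characters visible in the formatted keys.

# Sketch of the key-generation pattern seen in the trace above; not the exact
# nvmf/common.sh implementation.
gen_dhchap_key() {
    local digest=$1 len=$2
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex characters of randomness
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # Assumed stand-in for the traced "python -" formatter:
    printf 'DHHC-1:%02d:%s:\n' "${digests[$digest]}" \
        "$(printf '%s' "$key" | base64 -w0)" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

# e.g. ckeys[1]=$(gen_dhchap_key sha384 48), keys[2]=$(gen_dhchap_key sha256 32), ...

As a cross-check, the 48-character secret 76ee1e428204ad083b36feecaa7263ae4a2836555cfb2ae4 generated at the start of this block reappears further down as DHHC-1:02:NzZlZTFlNDI4...: and base64-decoding that payload yields the same hex string followed by the checksum bytes.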
00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qyX 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.900 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.h65 ]] 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.h65 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Fvs 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.dQW ]] 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dQW 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.udA 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Po4 ]] 00:35:48.159 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Po4 00:35:48.160 11:08:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.335 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Anc ]] 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Anc 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.n5K 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:35:48.160 11:08:26 
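Once the SPDK application is up on /var/tmp/spdk.sock, host/auth.sh@80-82 registers every generated key file with it. rpc_cmd in the trace is the test harness wrapper around scripts/rpc.py; spelled out against the RPC client directly, the loop is roughly:

# Roughly the loop traced above. key/ckey paths are the temp files created by
# gen_dhchap_key, and rpc.py talks to the default /var/tmp/spdk.sock socket.
for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
    if [[ -n ${ckeys[$i]} ]]; then                       # ckeys[4] is empty, so it is skipped
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done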
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:48.160 11:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:35:52.349 Waiting for block devices as requested 00:35:52.349 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:52.349 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:52.349 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:52.349 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:52.349 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:52.349 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:52.349 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:52.349 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:52.349 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:52.349 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:52.349 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:52.608 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:52.608 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:52.608 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:52.867 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:52.867 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:52.867 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:35:53.802 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:53.802 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:53.802 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:53.803 No valid GPT data, bailing 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:35:53.803 00:35:53.803 Discovery Log Number of Records 2, Generation counter 2 00:35:53.803 =====Discovery Log Entry 0====== 00:35:53.803 trtype: rdma 00:35:53.803 adrfam: ipv4 00:35:53.803 subtype: current discovery subsystem 00:35:53.803 treq: not specified, sq flow control disable supported 00:35:53.803 portid: 1 00:35:53.803 trsvcid: 4420 00:35:53.803 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:53.803 traddr: 192.168.100.8 00:35:53.803 eflags: none 00:35:53.803 rdma_prtype: not specified 00:35:53.803 rdma_qptype: connected 00:35:53.803 rdma_cms: rdma-cm 00:35:53.803 rdma_pkey: 0x0000 00:35:53.803 =====Discovery Log Entry 1====== 00:35:53.803 trtype: rdma 00:35:53.803 adrfam: ipv4 00:35:53.803 subtype: nvme subsystem 00:35:53.803 treq: not specified, sq flow control disable supported 00:35:53.803 portid: 1 00:35:53.803 trsvcid: 4420 00:35:53.803 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:53.803 traddr: 192.168.100.8 00:35:53.803 eflags: none 00:35:53.803 rdma_prtype: not specified 00:35:53.803 rdma_qptype: connected 00:35:53.803 rdma_cms: rdma-cm 00:35:53.803 rdma_pkey: 0x0000 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:53.803 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
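configure_kernel_target drives the kernel nvmet target purely through configfs. bash xtrace does not show redirection targets, so only the echoed values appear above; a plausible reconstruction using the standard nvmet attribute names (the targets themselves are assumptions, only the values are taken from the trace) is:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

# /dev/nvme0n1 was picked above as the first non-zoned namespace with no
# partition table ("No valid GPT data, bailing" is the expected outcome).
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"   # assumed target
echo 1             > "$subsys/namespaces/1/enable"        # assumed target
echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"         # assumed targets below
echo rdma          > "$nvmet/ports/1/addr_trtype"
echo 4420          > "$nvmet/ports/1/addr_trsvcid"
echo ipv4          > "$nvmet/ports/1/addr_adrfam"

ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover output that follows confirms the port exposes both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 over rdma/4420.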
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
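nvmet_auth_set_key pushes the same DHHC-1 strings to the kernel side of the handshake. Again the redirection targets are not visible in the xtrace; assuming the host entry created earlier under /sys/kernel/config/nvmet/hosts/ exposes the usual dhchap_* attributes, the echoes in this first pass (sha256, ffdhe2048, keyid 1) would correspond to something like:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# Attribute names below are assumptions; only the echoed values appear in the trace.
echo 'hmac(sha256)' > "$host/dhchap_hash"      # negotiated digest
echo 'ffdhe2048'    > "$host/dhchap_dhgroup"   # DH group
echo 'DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==:' \
    > "$host/dhchap_key"                       # host secret for this keyid
echo 'DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==:' \
    > "$host/dhchap_ctrl_key"                  # controller secret, set only when a ckey exists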
host/auth.sh@61 -- # get_main_ns_ip 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.062 nvme0n1 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.062 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.321 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.321 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.321 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.321 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.321 11:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.321 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.581 nvme0n1 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.581 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.582 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.582 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.582 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:54.582 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:54.582 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:54.582 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:54.582 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:54.582 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:54.582 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.582 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.841 nvme0n1 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.841 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.101 nvme0n1 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.101 11:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.360 nvme0n1 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.360 11:08:34 
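Each iteration of the digest/dhgroup/keyid loops above repeats the same connect_authenticate sequence on the SPDK host side; condensed to direct rpc.py calls (rpc_cmd is the harness wrapper), one pass looks roughly like this:

# One pass of the loop, e.g. sha256 / ffdhe2048 / keyid 0. The kernel target
# must already hold the matching DHHC-1 secret via the configfs attributes.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Success criterion used by the test: the controller shows up, then is detached
# before the next digest/dhgroup/keyid combination.
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0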
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:55.360 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.619 nvme0n1 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.619 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 
00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.879 11:08:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.879 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.139 nvme0n1 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.139 11:08:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.139 11:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.397 nvme0n1 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
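Each connect_authenticate call in the trace expands to the same host-side RPC sequence: restrict the allowed digest/dhgroup, attach with the key pair for the current keyid, confirm the controller exists, and detach. A standalone sketch of one such iteration (keyid 1 under ffdhe3072, as traced above), assuming rpc_cmd resolves to SPDK's RPC client and that the key1/ckey1 names were registered earlier in the test setup:

    # Allow only the digest and DH group under test for DH-HMAC-CHAP negotiation.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # Attach over RDMA to the target's first IP, authenticating with key1 and
    # offering ckey1 so the controller is authenticated in the reverse direction too.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # The attach only succeeds if authentication passed; verify, then clean up
    # before the next keyid/dhgroup combination.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0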
00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.397 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.654 nvme0n1 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.654 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.912 nvme0n1 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.912 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.170 11:08:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.170 11:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.428 nvme0n1 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:57.428 
11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.428 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.429 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.429 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.429 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:57.429 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:57.429 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:57.429 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:57.429 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:57.429 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:57.429 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.429 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.686 nvme0n1 00:35:57.686 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.686 
11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.686 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.686 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.686 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.686 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.686 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.686 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.686 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.686 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.945 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.204 nvme0n1 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.204 11:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.204 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:58.204 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:58.204 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:58.204 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:58.204 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:58.204 
11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:58.204 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.204 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.464 nvme0n1 00:35:58.464 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.464 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.464 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.464 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.464 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.464 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.464 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.464 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.464 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.464 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.722 11:08:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:58.722 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.723 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.981 nvme0n1 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.981 
11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.981 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.982 11:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.240 nvme0n1 00:35:59.240 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.240 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.240 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.240 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.240 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.240 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.498 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.499 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.757 nvme0n1 00:35:59.757 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.757 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.757 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.757 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
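The get_main_ns_ip expansions that precede every attach (nvmf/common.sh@769-@783) pick the address the host dials: an associative array maps the transport under test to the name of an environment variable, and the helper echoes that variable's value, 192.168.100.8 for RDMA in this run. A sketch of that selection logic as it appears from the trace; the TEST_TRANSPORT variable name and the guards on the failure paths are assumptions, since only the success path is traced here:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the target-side address (192.168.100.8 here)
            ["tcp"]=NVMF_INITIATOR_IP
        )

        # Map the transport under test to a variable *name*, then dereference it.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # assumed guard
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                                                    # assumed guard
        echo "${!ip}"
    }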
00:35:59.757 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:00.017 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.018 11:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.277 nvme0n1 00:36:00.277 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.277 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.277 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.277 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.277 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:00.537 11:08:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.537 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.796 nvme0n1 00:36:00.796 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.063 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:01.064 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:01.065 11:08:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.065 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:01.066 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:01.066 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:01.066 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:01.066 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:01.066 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:01.066 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.066 11:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.328 nvme0n1 00:36:01.328 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.328 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.328 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.328 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.328 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.328 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.587 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.587 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.587 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.587 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.587 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.587 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:01.588 11:08:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.588 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.846 nvme0n1 00:36:01.846 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.846 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.846 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.846 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.846 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 
00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.106 11:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.676 nvme0n1 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.676 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.677 11:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.615 nvme0n1 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:03.615 11:08:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.615 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.184 nvme0n1 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.184 
11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.184 11:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.753 nvme0n1 00:36:04.753 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.753 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.753 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.753 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.753 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.753 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.753 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.753 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.753 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.753 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:05.013 11:08:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.013 11:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.581 nvme0n1 00:36:05.581 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.581 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.581 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.581 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.581 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.581 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.582 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.841 nvme0n1 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.841 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.842 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.101 nvme0n1 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.101 11:08:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.101 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:06.102 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.102 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:06.102 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.102 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.361 11:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.361 nvme0n1 00:36:06.361 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.361 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.361 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:06.361 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.361 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.361 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.361 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.361 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.361 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.361 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.620 11:08:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:06.620 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.621 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.621 nvme0n1 00:36:06.621 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:06.879 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.880 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:36:07.139 nvme0n1 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:07.139 
11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.139 11:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.399 nvme0n1 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.399 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.400 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:07.400 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:07.400 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:07.400 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:07.400 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:07.400 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:07.400 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.400 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.659 nvme0n1 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.659 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.918 nvme0n1 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.918 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.178 11:08:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.178 11:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.439 nvme0n1 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:08.439 11:08:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.439 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.699 nvme0n1 00:36:08.699 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.699 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.699 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:36:08.699 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:08.700 11:08:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.700 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.959 nvme0n1 00:36:08.959 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.959 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.959 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.959 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.959 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.219 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.219 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.219 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.220 11:08:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:09.220 11:08:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.220 11:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.484 nvme0n1 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.484 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.796 nvme0n1 00:36:09.796 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.796 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.796 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.796 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.796 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.796 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.796 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.171 11:08:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.171 11:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.431 nvme0n1 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.431 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.691 nvme0n1 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.691 11:08:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.691 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.260 nvme0n1 00:36:11.260 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.260 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.260 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.260 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.260 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.260 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.260 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.260 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.260 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.260 11:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
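For readers following this trace: the xtrace lines above repeat one iteration of the nvmf_auth_host loop per DH group and key index. Below is a minimal sketch of that loop, reconstructed only from the host/auth.sh loop and function names visible in this log; the dhgroups and keys arrays themselves are not printed here, so the literal values are assumptions drawn from the groups (ffdhe4096/6144/8192) and key indexes (0-4) that actually appear in this part of the trace, and only the sha384 digest shows up in this section.

    # Sketch reconstructed from the trace above; not the authoritative host/auth.sh.
    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do          # host/auth.sh@101
        for keyid in 0 1 2 3 4; do                            # host/auth.sh@102
            # Target side: install the DH-HMAC-CHAP key (and controller key, if any)
            # for this digest/dhgroup/key index.
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"     # host/auth.sh@103
            # Host side: set options, attach over RDMA, verify the controller, detach.
            connect_authenticate sha384 "$dhgroup" "$keyid"   # host/auth.sh@104
        done
    done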
00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.260 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.829 nvme0n1 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:11.829 11:08:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.829 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.397 nvme0n1 00:36:12.397 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.397 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.397 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.397 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.397 11:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
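The connect_authenticate half of each iteration is also fully visible in the rpc_cmd calls traced above. A condensed host-side sketch follows, using only commands that appear in this log; rpc_cmd appears to be the test suite's RPC wrapper, the address and NQNs are the ones printed above, and key2/ckey2 stand in for whichever key index the iteration is on.

    # Condensed from the rpc_cmd calls traced above (ffdhe6144, key index 2 shown).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # The namespace (nvme0n1) appears once authentication succeeds; verify, then tear down.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0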
00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.397 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.656 nvme0n1 00:36:12.656 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.657 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.657 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.657 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.657 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.917 11:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.176 nvme0n1 00:36:13.176 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.176 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.176 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.176 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.176 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.437 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.006 nvme0n1 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.006 11:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:14.576 nvme0n1 00:36:14.576 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.576 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.576 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.576 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.576 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.576 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:14.835 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.836 11:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.404 nvme0n1 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:15.404 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:15.405 
11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.405 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.971 nvme0n1 00:36:15.971 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.229 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.230 11:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.796 nvme0n1 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.796 11:08:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.796 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.055 nvme0n1 00:36:17.055 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.055 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.055 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.055 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.055 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.055 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.055 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.055 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.055 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.055 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:17.314 11:08:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.314 11:08:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.314 nvme0n1 00:36:17.314 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.314 11:08:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.314 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.314 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.314 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.314 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:17.574 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.575 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.834 nvme0n1 00:36:17.834 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.834 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.834 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.834 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.834 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.834 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.834 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.834 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:17.835 11:08:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.835 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.096 nvme0n1 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:18.096 11:08:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.096 11:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.356 nvme0n1 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:18.356 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.357 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.616 nvme0n1 00:36:18.616 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.616 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.616 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.616 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.616 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.616 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.616 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.616 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.616 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.616 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:18.875 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.876 nvme0n1 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.876 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.876 11:08:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.135 11:08:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.135 11:08:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.395 nvme0n1 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 
00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:19.395 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.395 11:08:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.655 nvme0n1 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.655 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.915 nvme0n1 00:36:19.915 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.915 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.915 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.915 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.915 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.915 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.915 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.915 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.915 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.915 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:20.174 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.175 11:08:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.435 nvme0n1 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.435 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.694 nvme0n1 00:36:20.694 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.694 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.694 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.694 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.694 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.694 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.953 11:08:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.953 11:08:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.953 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.213 nvme0n1 00:36:21.213 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.213 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.213 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.213 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.213 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.213 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.213 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.213 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.213 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.213 11:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.213 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.472 nvme0n1 00:36:21.472 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.472 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.472 11:09:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.472 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.472 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.472 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:21.731 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.732 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.991 nvme0n1 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:21.991 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:21.992 11:09:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.992 11:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.560 nvme0n1 00:36:22.560 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:22.561 11:09:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.561 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.130 nvme0n1 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
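Each iteration traced above follows the same pattern: host/auth.sh programs the key for the current (digest, dhgroup, keyid) tuple on the kernel nvmet target, restricts the SPDK host to that digest and DH group, attaches a controller over RDMA with the matching DH-HMAC-CHAP key(s), confirms the controller shows up as nvme0, and detaches it. A condensed sketch of one such iteration follows; the RPC flags, NQNs and address are taken from the trace, while the nvmet configfs attribute names and the earlier registration of the key names key1/ckey1 with SPDK are assumptions not shown in this excerpt.

# target side (roughly what nvmet_auth_set_key does; configfs attribute names are assumed)
hostnqn=nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash
echo ffdhe6144      > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup
echo "$key1"        > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key       # the DHHC-1:00:... secret shown in the trace
echo "$ckey1"       > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_ctrl_key  # the DHHC-1:02:... secret shown in the trace

# host side: limit SPDK to the digest/dhgroup under test, attach with the matching keys, verify, detach
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
rpc_cmd bdev_nvme_detach_controller nvme0

The 192.168.100.8 address comes from get_main_ns_ip, which resolves NVMF_FIRST_TARGET_IP because the transport under test is rdma, as the nvmf/common.sh trace lines above show.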
00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.130 11:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.699 nvme0n1 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:23.699 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:23.700 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:23.700 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:23.700 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.700 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.960 nvme0n1 00:36:23.960 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.960 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.960 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.960 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.960 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:36:23.960 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.220 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.220 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.220 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.220 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.220 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.220 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.220 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:24.220 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.220 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.221 11:09:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.481 nvme0n1 00:36:24.481 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.481 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.481 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.481 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.481 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.481 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.741 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.741 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.741 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 
00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhMmZkMzY1NmZlODhjZjFhNmUyYjI2Yjc4OGFmY2NLngMw: 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: ]] 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTdhMjNjNmQ4OTFiNmQ2YjEwMzQ1NzIwZjgzYjEyOWVhYWYxNTgwMmQ2NThiNTlkOTJjNDRmMmI1MTVjOTE2OGBA/Yk=: 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.742 11:09:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.311 nvme0n1 
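connect_authenticate (host/auth.sh, the @55-@65 markers in this trace) reduces to a handful of JSON-RPCs against the initiator. A condensed sketch of the same sequence using scripts/rpc.py directly; rpc_cmd in the trace is assumed to be the test suite's wrapper around that script, and key0/ckey0 are keyring names registered earlier in the run, outside this excerpt:

# Limit the host to the digest/dhgroup pair under test, attach with the
# DH-HMAC-CHAP keys for this keyid, confirm the controller, then detach.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0              # clean up for the next keyid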
00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.311 11:09:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.311 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.250 nvme0n1 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
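get_main_ns_ip (the nvmf/common.sh @769-@783 markers repeated around each attach) only resolves which address variable to hand to the attach call. A paraphrased sketch; the transport variable name below is an assumption, and the trace only shows that "rdma" selects NVMF_FIRST_TARGET_IP, which is 192.168.100.8 on this rig:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # $TEST_TRANSPORT is a guessed name; the [[ -z ... ]] fallback checks seen in
    # the trace are omitted here.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    echo "${!ip}"
}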
00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.250 11:09:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.819 nvme0n1 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRhOGQ4ZTczMDU1MzUyNWY5MmM5MGU1OTA3ODA1NjQxNDRkZGM2ZWFiYzhiNTBm7147cw==: 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: ]] 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTkxYzM4N2M3ZmE3NDc4ZmExYmQxMmJmOWViMGQ3ZTH3XufF: 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:26.819 11:09:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.819 11:09:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.387 nvme0n1 00:36:27.387 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.387 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.387 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.387 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.387 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.387 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.387 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.387 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
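The @101/@102 markers are the key-matrix loops in host/auth.sh: for the digest currently under test, every dhgroup is paired with every key index, and each pass reprograms the target side before re-authenticating from the host. Roughly (the enclosing digest loop that supplies sha512 here is outside this excerpt):

for dhgroup in "${dhgroups[@]}"; do            # ffdhe6144, ffdhe8192, ... in this run
    for keyid in "${!keys[@]}"; do             # 0..4, with and without a ctrlr key
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel nvmet target for this key
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
    done
done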
00:36:27.387 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.387 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTU0NmNiOTlkMDU2YjMwYWU0NDc0NjY5MTc4NTJlMmI1Y2M3NDBmYTc3NTc3NjZhMDQyZmJiNWUyZGVjNTAzY+7gQcE=: 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.646 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.647 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.224 nvme0n1 00:36:28.224 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.224 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.224 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.224 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.224 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.224 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.224 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.224 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.224 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.224 11:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.224 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.224 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:28.224 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.224 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:28.224 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:28.224 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:28.225 
11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.225 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.485 request: 00:36:28.485 { 00:36:28.485 "name": "nvme0", 00:36:28.485 "trtype": "rdma", 00:36:28.485 "traddr": "192.168.100.8", 00:36:28.485 "adrfam": "ipv4", 00:36:28.485 "trsvcid": "4420", 00:36:28.485 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:36:28.485 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:28.485 "prchk_reftag": false, 00:36:28.485 "prchk_guard": false, 00:36:28.485 "hdgst": false, 00:36:28.485 "ddgst": false, 00:36:28.485 "allow_unrecognized_csi": false, 00:36:28.485 "method": "bdev_nvme_attach_controller", 00:36:28.485 "req_id": 1 00:36:28.485 } 00:36:28.485 Got JSON-RPC error response 00:36:28.485 response: 00:36:28.485 { 00:36:28.485 "code": -5, 00:36:28.485 "message": "Input/output error" 00:36:28.485 } 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.485 request: 00:36:28.485 { 00:36:28.485 "name": "nvme0", 00:36:28.485 "trtype": "rdma", 00:36:28.485 "traddr": "192.168.100.8", 00:36:28.485 "adrfam": "ipv4", 00:36:28.485 "trsvcid": "4420", 00:36:28.485 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:28.485 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:28.485 "prchk_reftag": false, 00:36:28.485 "prchk_guard": false, 00:36:28.485 "hdgst": false, 00:36:28.485 "ddgst": false, 00:36:28.485 "dhchap_key": "key2", 00:36:28.485 "allow_unrecognized_csi": false, 00:36:28.485 "method": "bdev_nvme_attach_controller", 00:36:28.485 "req_id": 1 00:36:28.485 } 00:36:28.485 Got JSON-RPC error response 00:36:28.485 response: 00:36:28.485 { 00:36:28.485 "code": -5, 00:36:28.485 "message": "Input/output error" 00:36:28.485 } 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.485 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.746 request: 00:36:28.746 { 00:36:28.746 "name": "nvme0", 00:36:28.746 "trtype": "rdma", 00:36:28.746 "traddr": "192.168.100.8", 00:36:28.746 "adrfam": "ipv4", 00:36:28.746 "trsvcid": "4420", 00:36:28.746 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:28.746 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:28.746 "prchk_reftag": false, 00:36:28.746 "prchk_guard": false, 00:36:28.746 "hdgst": false, 00:36:28.746 "ddgst": false, 00:36:28.746 "dhchap_key": "key1", 00:36:28.746 "dhchap_ctrlr_key": "ckey2", 00:36:28.746 "allow_unrecognized_csi": false, 00:36:28.746 "method": "bdev_nvme_attach_controller", 00:36:28.746 "req_id": 1 00:36:28.746 } 00:36:28.746 Got JSON-RPC error response 00:36:28.746 response: 00:36:28.746 { 00:36:28.746 "code": -5, 00:36:28.746 "message": "Input/output error" 00:36:28.746 } 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:28.746 11:09:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.746 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.006 nvme0n1 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.006 
11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.006 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.266 request: 00:36:29.266 { 00:36:29.266 "name": "nvme0", 00:36:29.266 "dhchap_key": "key1", 00:36:29.266 "dhchap_ctrlr_key": "ckey2", 00:36:29.266 "method": "bdev_nvme_set_keys", 00:36:29.266 "req_id": 1 00:36:29.266 } 00:36:29.266 Got JSON-RPC error response 00:36:29.266 response: 00:36:29.266 { 00:36:29.266 "code": -13, 00:36:29.266 "message": "Permission denied" 00:36:29.266 } 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:29.266 11:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:30.204 11:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.204 11:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:30.204 11:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.204 11:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.204 11:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.204 11:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:30.204 11:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:31.142 11:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.142 11:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:31.142 11:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.142 11:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.143 11:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.402 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:31.402 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:31.402 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.402 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:31.402 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:31.402 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:31.402 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:31.402 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:31.402 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0YjIxYzc5MGM5MTMxMWI1ZGMxOTM2M2I5YTFlY2UxNDIzNTYyMTVhNzdhMDI23gNpXQ==: 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: ]] 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzZlZTFlNDI4MjA0YWQwODNiMzZmZWVjYWE3MjYzYWU0YTI4MzY1NTVjZmIyYWU04bsYVg==: 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.403 nvme0n1 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njc5ZTU1MjZiMDlmMGNhYTU1M2E3ZWE0MzA2NGMyMjKFoIZn: 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: ]] 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk2MmU2ZWQzZGVkZGQwMjdmZTcyMjhhMjI1MDFmZjC+B2eN: 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.403 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.663 request: 00:36:31.663 { 00:36:31.663 "name": "nvme0", 00:36:31.663 "dhchap_key": "key2", 00:36:31.663 "dhchap_ctrlr_key": "ckey1", 00:36:31.663 "method": "bdev_nvme_set_keys", 00:36:31.663 "req_id": 1 00:36:31.663 } 00:36:31.663 Got JSON-RPC error response 00:36:31.663 response: 00:36:31.663 { 00:36:31.663 "code": -13, 00:36:31.663 "message": "Permission denied" 00:36:31.663 } 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:31.663 11:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:32.599 11:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.599 11:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:32.599 11:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.599 11:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.599 11:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.599 11:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:32.599 11:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:33.536 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.536 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:33.536 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.536 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:33.795 
11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:36:33.795 rmmod nvme_rdma 00:36:33.795 rmmod nvme_fabrics 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1910166 ']' 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1910166 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1910166 ']' 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1910166 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1910166 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1910166' 00:36:33.795 killing process with pid 1910166 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1910166 00:36:33.795 11:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1910166 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:34.732 
11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:36:34.732 11:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:36:38.929 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:38.929 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:38.929 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:38.929 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:38.929 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:38.930 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:38.930 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:38.930 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:38.930 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:38.930 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:38.930 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:38.930 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:38.930 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:38.930 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:38.930 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:38.930 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:40.838 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:36:40.838 11:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.qyX /tmp/spdk.key-null.Fvs /tmp/spdk.key-sha256.udA /tmp/spdk.key-sha384.335 /tmp/spdk.key-sha512.n5K /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:36:40.838 11:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:36:45.036 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:80:04.1 (8086 2021): Already 
using the vfio-pci driver 00:36:45.036 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:45.036 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:45.036 00:36:45.036 real 1m6.904s 00:36:45.036 user 0m58.219s 00:36:45.036 sys 0m18.281s 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.036 ************************************ 00:36:45.036 END TEST nvmf_auth_host 00:36:45.036 ************************************ 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.036 ************************************ 00:36:45.036 START TEST nvmf_bdevperf 00:36:45.036 ************************************ 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:36:45.036 * Looking for test storage... 
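Before the bdevperf test gets going, note that the clean_kernel_target sequence traced at the end of the auth test above reduces to removing the configfs links before their parent directories and then unloading the nvmet modules. A condensed sketch assembled from the commands in the trace (paths are the ones used by this test; the intermediate echo 0 step is omitted because its target attribute is not visible in the trace):

  rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_rdma nvmet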
00:36:45.036 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:45.036 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.037 --rc genhtml_branch_coverage=1 00:36:45.037 --rc genhtml_function_coverage=1 00:36:45.037 --rc genhtml_legend=1 00:36:45.037 --rc geninfo_all_blocks=1 00:36:45.037 --rc geninfo_unexecuted_blocks=1 00:36:45.037 00:36:45.037 ' 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.037 --rc genhtml_branch_coverage=1 00:36:45.037 --rc genhtml_function_coverage=1 00:36:45.037 --rc genhtml_legend=1 00:36:45.037 --rc geninfo_all_blocks=1 00:36:45.037 --rc geninfo_unexecuted_blocks=1 00:36:45.037 00:36:45.037 ' 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.037 --rc genhtml_branch_coverage=1 00:36:45.037 --rc genhtml_function_coverage=1 00:36:45.037 --rc genhtml_legend=1 00:36:45.037 --rc geninfo_all_blocks=1 00:36:45.037 --rc geninfo_unexecuted_blocks=1 00:36:45.037 00:36:45.037 ' 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.037 --rc genhtml_branch_coverage=1 00:36:45.037 --rc genhtml_function_coverage=1 00:36:45.037 --rc genhtml_legend=1 00:36:45.037 --rc geninfo_all_blocks=1 00:36:45.037 --rc geninfo_unexecuted_blocks=1 00:36:45.037 00:36:45.037 ' 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.037 11:09:23 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:45.037 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:45.037 11:09:23 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:45.037 11:09:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:53.160 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:53.161 11:09:31 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:36:53.161 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:36:53.161 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:36:53.161 Found net devices under 0000:d9:00.0: mlx_0_0 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:36:53.161 Found net devices under 0000:d9:00.1: mlx_0_1 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:36:53.161 11:09:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:36:53.161 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:53.161 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:36:53.161 altname enp217s0f0np0 00:36:53.161 altname ens818f0np0 00:36:53.161 inet 192.168.100.8/24 scope global mlx_0_0 00:36:53.161 valid_lft forever preferred_lft forever 00:36:53.161 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:53.161 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:36:53.161 11:09:32 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:53.161 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:53.161 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:53.161 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:53.161 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:36:53.161 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:36:53.161 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:36:53.161 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:53.161 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:36:53.161 altname enp217s0f1np1 00:36:53.161 altname ens818f1np1 00:36:53.161 inet 192.168.100.9/24 scope global mlx_0_1 00:36:53.161 valid_lft forever preferred_lft forever 00:36:53.161 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:36:53.162 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:53.162 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:53.162 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:36:53.162 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:36:53.162 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:36:53.162 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:53.162 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:53.162 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:53.162 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:36:53.422 11:09:32 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:36:53.422 192.168.100.9' 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:36:53.422 192.168.100.9' 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:36:53.422 192.168.100.9' 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1927478 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1927478 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1927478 ']' 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:53.422 11:09:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.422 [2024-11-28 11:09:32.241579] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:36:53.422 [2024-11-28 11:09:32.241684] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:53.682 [2024-11-28 11:09:32.399899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:53.682 [2024-11-28 11:09:32.505064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:53.682 [2024-11-28 11:09:32.505131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:53.682 [2024-11-28 11:09:32.505145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:53.682 [2024-11-28 11:09:32.505160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:53.682 [2024-11-28 11:09:32.505171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
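The target launched here, build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, passes -m 0xE as the reactor core mask (cores 1-3, matching the three "Reactor started on core" notices), -e 0xFFFF as the tracepoint group mask (hence the note about 'spdk_trace -s nvmf -i 0'), and -i 0 as the shared-memory id that spdk_trace attaches to. A minimal standalone sketch of the same bring-up, assuming the SPDK repo as the working directory:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # the waitforlisten helper in the trace then waits until the target
  # is listening on /var/tmp/spdk.sock before any nvmf_* RPCs are issued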
00:36:53.682 [2024-11-28 11:09:32.507659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:53.682 [2024-11-28 11:09:32.507722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:53.682 [2024-11-28 11:09:32.507728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:54.251 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:54.251 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:54.251 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:54.252 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:54.252 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:54.252 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:54.252 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:36:54.252 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.252 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:54.252 [2024-11-28 11:09:33.121861] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fae23976940) succeed. 00:36:54.252 [2024-11-28 11:09:33.131445] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fae23931940) succeed. 00:36:54.511 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.511 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:54.511 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.511 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:54.771 Malloc0 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:36:54.771 [2024-11-28 11:09:33.426327] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:54.771 { 00:36:54.771 "params": { 00:36:54.771 "name": "Nvme$subsystem", 00:36:54.771 "trtype": "$TEST_TRANSPORT", 00:36:54.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:54.771 "adrfam": "ipv4", 00:36:54.771 "trsvcid": "$NVMF_PORT", 00:36:54.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:54.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:54.771 "hdgst": ${hdgst:-false}, 00:36:54.771 "ddgst": ${ddgst:-false} 00:36:54.771 }, 00:36:54.771 "method": "bdev_nvme_attach_controller" 00:36:54.771 } 00:36:54.771 EOF 00:36:54.771 )") 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:54.771 11:09:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:54.771 "params": { 00:36:54.771 "name": "Nvme1", 00:36:54.771 "trtype": "rdma", 00:36:54.771 "traddr": "192.168.100.8", 00:36:54.771 "adrfam": "ipv4", 00:36:54.771 "trsvcid": "4420", 00:36:54.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:54.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:54.771 "hdgst": false, 00:36:54.771 "ddgst": false 00:36:54.771 }, 00:36:54.771 "method": "bdev_nvme_attach_controller" 00:36:54.771 }' 00:36:54.771 [2024-11-28 11:09:33.513146] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:36:54.771 [2024-11-28 11:09:33.513238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1927760 ] 00:36:55.031 [2024-11-28 11:09:33.670181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.031 [2024-11-28 11:09:33.772658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.600 Running I/O for 1 seconds... 
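The trace above builds a one-namespace target (Malloc0 under nqn.2016-06.io.spdk:cnode1, listening on 192.168.100.8:4420 over RDMA) and then starts bdevperf with --json /dev/fd/62, feeding it the JSON that gen_nvmf_target_json prints just above; -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w verify the workload, and -t 1 the run time in seconds. The equivalent invocation with the configuration in an ordinary file would look roughly like this (bdevperf.json is a hypothetical file holding the printed JSON):

  # drive the Nvme1n1 bdev created by the bdev_nvme_attach_controller entry in the JSON
  ./build/examples/bdevperf --json bdevperf.json -q 128 -o 4096 -w verify -t 1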
00:36:56.538 15561.00 IOPS, 60.79 MiB/s 00:36:56.538 Latency(us) 00:36:56.538 [2024-11-28T10:09:35.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:56.538 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:56.538 Verification LBA range: start 0x0 length 0x4000 00:36:56.538 Nvme1n1 : 1.01 15593.57 60.91 0.00 0.00 8163.55 2922.91 18350.08 00:36:56.538 [2024-11-28T10:09:35.419Z] =================================================================================================================== 00:36:56.538 [2024-11-28T10:09:35.419Z] Total : 15593.57 60.91 0.00 0.00 8163.55 2922.91 18350.08 00:36:57.476 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1928251 00:36:57.476 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:57.476 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:57.476 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:57.476 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:57.476 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:57.477 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:57.477 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:57.477 { 00:36:57.477 "params": { 00:36:57.477 "name": "Nvme$subsystem", 00:36:57.477 "trtype": "$TEST_TRANSPORT", 00:36:57.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:57.477 "adrfam": "ipv4", 00:36:57.477 "trsvcid": "$NVMF_PORT", 00:36:57.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:57.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:57.477 "hdgst": ${hdgst:-false}, 00:36:57.477 "ddgst": ${ddgst:-false} 00:36:57.477 }, 00:36:57.477 "method": "bdev_nvme_attach_controller" 00:36:57.477 } 00:36:57.477 EOF 00:36:57.477 )") 00:36:57.477 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:57.477 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:57.477 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:57.477 11:09:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:57.477 "params": { 00:36:57.477 "name": "Nvme1", 00:36:57.477 "trtype": "rdma", 00:36:57.477 "traddr": "192.168.100.8", 00:36:57.477 "adrfam": "ipv4", 00:36:57.477 "trsvcid": "4420", 00:36:57.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:57.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:57.477 "hdgst": false, 00:36:57.477 "ddgst": false 00:36:57.477 }, 00:36:57.477 "method": "bdev_nvme_attach_controller" 00:36:57.477 }' 00:36:57.477 [2024-11-28 11:09:36.171930] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
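This second bdevperf instance runs the same verify workload for 15 seconds (-t 15) while host/bdevperf.sh deliberately hard-kills the nvmf target underneath it, which is why the IOPS sample drops and the completions that follow are printed as "ABORTED - SQ DELETION": the target's queues vanish while I/O is still in flight. The fault-injection step, as the trace performs it:

  kill -9 "$nvmfpid"   # host/bdevperf.sh@33: hard-kill the running nvmf_tgt (pid 1927478 in this run)
  sleep 3              # host/bdevperf.sh@35: give the host side time to start aborting outstanding I/O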
00:36:57.477 [2024-11-28 11:09:36.172024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1928251 ] 00:36:57.477 [2024-11-28 11:09:36.325787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.737 [2024-11-28 11:09:36.429068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.997 Running I/O for 15 seconds... 00:37:00.318 15490.00 IOPS, 60.51 MiB/s [2024-11-28T10:09:39.199Z] 15665.50 IOPS, 61.19 MiB/s [2024-11-28T10:09:39.199Z] 11:09:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1927478 00:37:00.318 11:09:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:01.520 11818.67 IOPS, 46.17 MiB/s [2024-11-28T10:09:40.401Z] [2024-11-28 11:09:40.153432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 
[2024-11-28 11:09:40.153705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.153975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.153986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.154000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.154012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.154025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.154036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.154049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.154063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.154076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.154087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.154101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.154112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.154125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.154137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.154150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.154161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.154175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.154186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.154200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.154211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.154225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.520 [2024-11-28 11:09:40.154236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.520 [2024-11-28 11:09:40.154251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fd000 len:0x1000 key:0x182f00 00:37:01.520 [2024-11-28 11:09:40.154264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:78 nsid:1 lba:21568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043db000 len:0x1000 key:0x182f00 00:37:01.521 
[2024-11-28 11:09:40.154713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d9000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d3000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d1000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.154977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.154989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.155003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.155015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.155028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.155039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.155053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.155065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.155078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.155090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.155104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.155115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.155129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.155141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.155154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.155166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.155180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:102 nsid:1 lba:21792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.155191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.521 [2024-11-28 11:09:40.155205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x182f00 00:37:01.521 [2024-11-28 11:09:40.155217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 
11:09:40.155428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 
nsid:1 lba:22016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.155987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.155999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.156010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.156023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.156035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.156048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.156059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.156072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.156083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.156097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.156110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.522 [2024-11-28 11:09:40.156123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x182f00 00:37:01.522 [2024-11-28 11:09:40.156134] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:01.523 [2024-11-28 11:09:40.156382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:22240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.156743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.156760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x182f00 00:37:01.523 [2024-11-28 11:09:40.166488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.168823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:01.523 [2024-11-28 11:09:40.168847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:01.523 [2024-11-28 11:09:40.168864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22288 len:8 PRP1 0x0 PRP2 0x0 00:37:01.523 [2024-11-28 11:09:40.168881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.169129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:01.523 [2024-11-28 11:09:40.169149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.169167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:01.523 [2024-11-28 11:09:40.169183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.169198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:01.523 [2024-11-28 11:09:40.169214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.169230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:37:01.523 [2024-11-28 11:09:40.169245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:01.523 [2024-11-28 11:09:40.200236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:01.523 [2024-11-28 11:09:40.200308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.523 [2024-11-28 11:09:40.200353] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress. 00:37:01.523 [2024-11-28 11:09:40.203817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.523 [2024-11-28 11:09:40.208130] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:01.523 [2024-11-28 11:09:40.208156] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:01.523 [2024-11-28 11:09:40.208168] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:37:02.352 8864.00 IOPS, 34.62 MiB/s [2024-11-28T10:09:41.233Z] [2024-11-28 11:09:41.212507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:02.352 [2024-11-28 11:09:41.212584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.352 [2024-11-28 11:09:41.212821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.352 [2024-11-28 11:09:41.212841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.352 [2024-11-28 11:09:41.212853] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:37:02.352 [2024-11-28 11:09:41.212869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.352 [2024-11-28 11:09:41.218588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.352 [2024-11-28 11:09:41.222259] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:02.352 [2024-11-28 11:09:41.222293] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:02.352 [2024-11-28 11:09:41.222309] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:37:03.290 7091.20 IOPS, 27.70 MiB/s [2024-11-28T10:09:42.171Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1927478 Killed "${NVMF_APP[@]}" "$@" 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1929108 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1929108 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1929108 ']' 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:03.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:03.290 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:03.549 [2024-11-28 11:09:42.185970] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:37:03.549 [2024-11-28 11:09:42.186066] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:03.549 [2024-11-28 11:09:42.226842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:03.549 [2024-11-28 11:09:42.226884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:37:03.549 [2024-11-28 11:09:42.227087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.549 [2024-11-28 11:09:42.227103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.549 [2024-11-28 11:09:42.227117] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:37:03.549 [2024-11-28 11:09:42.227134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:03.549 [2024-11-28 11:09:42.233816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.550 [2024-11-28 11:09:42.237168] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:03.550 [2024-11-28 11:09:42.237196] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:03.550 [2024-11-28 11:09:42.237207] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:37:03.550 [2024-11-28 11:09:42.347142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:03.809 [2024-11-28 11:09:42.451778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:03.809 [2024-11-28 11:09:42.451826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:03.809 [2024-11-28 11:09:42.451838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:03.809 [2024-11-28 11:09:42.451851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:03.809 [2024-11-28 11:09:42.451861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:03.809 [2024-11-28 11:09:42.454319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:03.809 [2024-11-28 11:09:42.454384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:03.809 [2024-11-28 11:09:42.454391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:04.329 5909.33 IOPS, 23.08 MiB/s [2024-11-28T10:09:43.210Z] 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:04.329 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:04.329 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:04.329 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:04.329 11:09:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:04.329 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:04.329 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:37:04.329 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.329 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:04.329 [2024-11-28 11:09:43.059214] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f6f9499a940) succeed. 00:37:04.329 [2024-11-28 11:09:43.068794] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f6f94956940) succeed. 00:37:04.588 [2024-11-28 11:09:43.241433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:04.588 [2024-11-28 11:09:43.241479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:04.588 [2024-11-28 11:09:43.241687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:04.588 [2024-11-28 11:09:43.241702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:04.588 [2024-11-28 11:09:43.241716] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:37:04.588 [2024-11-28 11:09:43.241733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:04.588 [2024-11-28 11:09:43.250129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:04.589 [2024-11-28 11:09:43.253385] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:04.589 [2024-11-28 11:09:43.253413] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:04.589 [2024-11-28 11:09:43.253424] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:04.589 Malloc0 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:04.589 [2024-11-28 11:09:43.361066] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.589 11:09:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1928251 00:37:05.415 5065.14 IOPS, 19.79 MiB/s [2024-11-28T10:09:44.296Z] [2024-11-28 11:09:44.257802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:05.415 [2024-11-28 11:09:44.257837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
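host/bdevperf.sh builds the target side purely over JSON-RPC; a plain scripts/rpc.py rendering of the same calls is sketched below. The method names and arguments are copied from the trace above, while the rpc.py path and the default RPC socket are assumptions (rpc_cmd in the harness normally forwards to scripts/rpc.py):

  rpc=./scripts/rpc.py
  # RDMA transport; -u is believed to be the transport I/O unit size in bytes
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # 64 MiB RAM-backed bdev with 512-byte blocks, used as the namespace
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # listener on the first RDMA IP, matching the "Target Listening" notice above
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420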
00:37:05.415 [2024-11-28 11:09:44.258033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:05.415 [2024-11-28 11:09:44.258049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:05.415 [2024-11-28 11:09:44.258062] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:37:05.415 [2024-11-28 11:09:44.258078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:05.415 [2024-11-28 11:09:44.265382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:05.674 [2024-11-28 11:09:44.302539] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:37:07.179 5497.00 IOPS, 21.47 MiB/s [2024-11-28T10:09:46.999Z] 6628.44 IOPS, 25.89 MiB/s [2024-11-28T10:09:48.092Z] 7535.20 IOPS, 29.43 MiB/s [2024-11-28T10:09:49.128Z] 8278.36 IOPS, 32.34 MiB/s [2024-11-28T10:09:50.065Z] 8889.33 IOPS, 34.72 MiB/s [2024-11-28T10:09:51.001Z] 9411.77 IOPS, 36.76 MiB/s [2024-11-28T10:09:51.940Z] 9853.14 IOPS, 38.49 MiB/s [2024-11-28T10:09:51.940Z] 10243.00 IOPS, 40.01 MiB/s 00:37:13.059 Latency(us) 00:37:13.059 [2024-11-28T10:09:51.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.059 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:13.059 Verification LBA range: start 0x0 length 0x4000 00:37:13.059 Nvme1n1 : 15.01 10243.28 40.01 12526.55 0.00 5600.75 704.51 1107296.26 00:37:13.059 [2024-11-28T10:09:51.940Z] =================================================================================================================== 00:37:13.059 [2024-11-28T10:09:51.940Z] Total : 10243.28 40.01 12526.55 0.00 5600.75 704.51 1107296.26 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:37:13.998 rmmod nvme_rdma 00:37:13.998 rmmod nvme_fabrics 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1929108 ']' 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1929108 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1929108 ']' 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1929108 00:37:13.998 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:37:14.257 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:14.257 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1929108 00:37:14.257 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:14.257 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:14.257 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1929108' 00:37:14.257 killing process with pid 1929108 00:37:14.257 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1929108 00:37:14.257 11:09:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1929108 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:37:16.166 00:37:16.166 real 0m31.164s 00:37:16.166 user 1m16.487s 00:37:16.166 sys 0m8.366s 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:16.166 ************************************ 00:37:16.166 END TEST nvmf_bdevperf 00:37:16.166 ************************************ 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.166 ************************************ 00:37:16.166 START TEST nvmf_target_disconnect 00:37:16.166 ************************************ 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:37:16.166 * Looking for test storage... 
00:37:16.166 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:16.166 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:16.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.167 --rc genhtml_branch_coverage=1 00:37:16.167 --rc genhtml_function_coverage=1 00:37:16.167 --rc genhtml_legend=1 00:37:16.167 --rc geninfo_all_blocks=1 00:37:16.167 --rc geninfo_unexecuted_blocks=1 00:37:16.167 00:37:16.167 ' 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:16.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.167 --rc genhtml_branch_coverage=1 00:37:16.167 --rc genhtml_function_coverage=1 00:37:16.167 --rc genhtml_legend=1 00:37:16.167 --rc geninfo_all_blocks=1 00:37:16.167 --rc geninfo_unexecuted_blocks=1 00:37:16.167 00:37:16.167 ' 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:16.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.167 --rc genhtml_branch_coverage=1 00:37:16.167 --rc genhtml_function_coverage=1 00:37:16.167 --rc genhtml_legend=1 00:37:16.167 --rc geninfo_all_blocks=1 00:37:16.167 --rc geninfo_unexecuted_blocks=1 00:37:16.167 00:37:16.167 ' 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:16.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.167 --rc genhtml_branch_coverage=1 00:37:16.167 --rc genhtml_function_coverage=1 00:37:16.167 --rc genhtml_legend=1 00:37:16.167 --rc geninfo_all_blocks=1 00:37:16.167 --rc geninfo_unexecuted_blocks=1 00:37:16.167 00:37:16.167 ' 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:16.167 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:16.167 11:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:37:24.291 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:37:24.291 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:37:24.291 11:10:02 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:37:24.291 Found net devices under 0000:d9:00.0: mlx_0_0 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:37:24.291 Found net devices under 0000:d9:00.1: mlx_0_1 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.291 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:37:24.292 11:10:02 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:37:24.292 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:24.292 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:37:24.292 altname enp217s0f0np0 00:37:24.292 altname ens818f0np0 00:37:24.292 inet 192.168.100.8/24 scope global mlx_0_0 00:37:24.292 valid_lft forever preferred_lft forever 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:37:24.292 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:24.292 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:37:24.292 altname enp217s0f1np1 00:37:24.292 altname ens818f1np1 00:37:24.292 inet 192.168.100.9/24 scope global mlx_0_1 00:37:24.292 valid_lft forever preferred_lft forever 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
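get_ip_address, traced above for mlx_0_0 and mlx_0_1, boils down to a single ip(8) pipeline; a condensed sketch (interface name and expected address are taken from this log):

  # print the first IPv4 address on the RDMA netdev; 192.168.100.8 for mlx_0_0 in this run
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1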
00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:37:24.292 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:24.293 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:24.293 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:37:24.293 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:37:24.293 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:37:24.293 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:37:24.293 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:24.293 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:24.293 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:37:24.293 192.168.100.9' 00:37:24.293 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:37:24.293 192.168.100.9' 00:37:24.293 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:37:24.293 11:10:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:37:24.293 192.168.100.9' 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:24.293 ************************************ 00:37:24.293 START TEST nvmf_target_disconnect_tc1 00:37:24.293 ************************************ 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:37:24.293 11:10:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:24.553 [2024-11-28 11:10:03.318681] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:24.553 [2024-11-28 11:10:03.318750] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:24.553 [2024-11-28 11:10:03.318764] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d6ec0 00:37:25.491 [2024-11-28 11:10:04.323048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:37:25.491 [2024-11-28 11:10:04.323148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
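tc1 deliberately runs the reconnect example before any subsystem or listener exists, so the probe against 192.168.100.8 is expected to fail. The invocation from the trace is condensed below; the flag meanings follow the usual SPDK example-app conventions and are an assumption here, not taken from the log:

  # -q queue depth, -o I/O size (bytes), -w workload, -M read percentage,
  # -t run time (s), -c core mask, -r target transport ID
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'

The test only passes if this command exits non-zero ("errors occurred" below); the NOT wrapper seen in the trace turns that expected failure into a pass.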
00:37:25.491 [2024-11-28 11:10:04.323207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:37:25.491 [2024-11-28 11:10:04.323381] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:25.491 [2024-11-28 11:10:04.323430] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:25.491 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:37:25.491 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:25.751 Initializing NVMe Controllers 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:25.751 00:37:25.751 real 0m1.347s 00:37:25.751 user 0m0.919s 00:37:25.751 sys 0m0.414s 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:25.751 ************************************ 00:37:25.751 END TEST nvmf_target_disconnect_tc1 00:37:25.751 ************************************ 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:25.751 ************************************ 00:37:25.751 START TEST nvmf_target_disconnect_tc2 00:37:25.751 ************************************ 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1935451 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1935451 00:37:25.751 11:10:04 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1935451 ']' 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:25.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:25.751 11:10:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:25.751 [2024-11-28 11:10:04.595329] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:37:25.751 [2024-11-28 11:10:04.595446] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:26.010 [2024-11-28 11:10:04.746221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:26.010 [2024-11-28 11:10:04.842570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:26.010 [2024-11-28 11:10:04.842623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:26.010 [2024-11-28 11:10:04.842636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:26.010 [2024-11-28 11:10:04.842648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:26.010 [2024-11-28 11:10:04.842658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:26.010 [2024-11-28 11:10:04.845090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:26.010 [2024-11-28 11:10:04.845173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:26.010 [2024-11-28 11:10:04.845239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:26.010 [2024-11-28 11:10:04.845265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:26.578 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:26.579 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:26.579 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:26.579 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:26.579 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:26.579 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:26.579 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:26.579 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.579 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:26.838 Malloc0 00:37:26.838 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.838 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:26.838 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.838 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:26.838 [2024-11-28 11:10:05.563669] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029740/0x7f700e568940) succeed. 00:37:26.838 [2024-11-28 11:10:05.573567] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000298c0/0x7f700e524940) succeed. 
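From here tc2 repeats the target setup and then exercises the actual disconnect: I/O is started through the reconnect example, the target process is hard-killed, the host accumulates failed completions, and disconnect_init brings a fresh target back up. A compressed sketch of that sequence, using the PIDs and parameters from this run (the sleeps mirror target_disconnect.sh; everything else is an assumption):

  # start I/O in the background against the listener created below
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"   # 1935451 in this run; completions start failing with sc=8
  sleep 2              # the host keeps retrying while the target is gone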
00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:27.098 [2024-11-28 11:10:05.852874] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1935730 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:27.098 11:10:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:29.005 11:10:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
1935451 00:37:29.005 11:10:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Write completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 Read completed with error (sct=0, sc=8) 00:37:30.385 starting I/O failed 00:37:30.385 [2024-11-28 11:10:09.168597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:31.324 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1935451 Killed "${NVMF_APP[@]}" "$@" 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1936290 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1936290 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1936290 ']' 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:31.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:31.324 11:10:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:31.324 [2024-11-28 11:10:09.969991] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:37:31.324 [2024-11-28 11:10:09.970088] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:31.324 [2024-11-28 11:10:10.157360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Write completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 Read completed with error (sct=0, sc=8) 00:37:31.324 starting I/O failed 00:37:31.324 [2024-11-28 11:10:10.174291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:31.583 [2024-11-28 11:10:10.263768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:31.583 [2024-11-28 11:10:10.263815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:31.583 [2024-11-28 11:10:10.263828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:31.583 [2024-11-28 11:10:10.263842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:31.583 [2024-11-28 11:10:10.263852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:31.583 [2024-11-28 11:10:10.266413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:31.583 [2024-11-28 11:10:10.266517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:31.583 [2024-11-28 11:10:10.266586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:31.583 [2024-11-28 11:10:10.266631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:32.152 Malloc0 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.152 11:10:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:32.152 [2024-11-28 11:10:10.920312] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029740/0x7ff69dd16940) succeed. 00:37:32.152 [2024-11-28 11:10:10.930487] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000298c0/0x7ff69cf1a940) succeed. 
00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Read completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.411 Write completed with error (sct=0, sc=8) 00:37:32.411 starting I/O failed 00:37:32.412 [2024-11-28 11:10:11.180032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.412 11:10:11 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:32.412 [2024-11-28 11:10:11.217237] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.412 11:10:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1935730 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 
starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Read completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 Write completed with error (sct=0, sc=8) 00:37:33.350 starting I/O failed 00:37:33.350 [2024-11-28 11:10:12.185616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.350 [2024-11-28 11:10:12.198613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.350 [2024-11-28 11:10:12.198698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.350 [2024-11-28 11:10:12.198732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.350 [2024-11-28 11:10:12.198749] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.350 [2024-11-28 11:10:12.198770] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.350 [2024-11-28 11:10:12.208415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.350 qpair failed and we were unable to recover it. 
00:37:33.350 [2024-11-28 11:10:12.218317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.350 [2024-11-28 11:10:12.218393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.350 [2024-11-28 11:10:12.218419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.350 [2024-11-28 11:10:12.218436] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.350 [2024-11-28 11:10:12.218448] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.350 [2024-11-28 11:10:12.228388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.611 [2024-11-28 11:10:12.238298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.238368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.238395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.238409] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.238423] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.248649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 00:37:33.611 [2024-11-28 11:10:12.258217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.258288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.258313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.258329] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.258340] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.268398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 
00:37:33.611 [2024-11-28 11:10:12.278326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.278395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.278422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.278436] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.278449] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.288505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 00:37:33.611 [2024-11-28 11:10:12.298395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.298472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.298497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.298513] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.298524] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.308557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 00:37:33.611 [2024-11-28 11:10:12.318434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.318505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.318538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.318552] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.318566] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.328623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 
00:37:33.611 [2024-11-28 11:10:12.338423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.338489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.338514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.338530] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.338542] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.348755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 00:37:33.611 [2024-11-28 11:10:12.358650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.358715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.358744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.358758] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.358774] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.369041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 00:37:33.611 [2024-11-28 11:10:12.378644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.378709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.378734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.378750] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.378762] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.388782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 
00:37:33.611 [2024-11-28 11:10:12.398669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.398731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.398758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.398772] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.398790] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.408980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 00:37:33.611 [2024-11-28 11:10:12.418778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.418845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.418869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.418887] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.418899] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.428950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 00:37:33.611 [2024-11-28 11:10:12.438868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.438927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.438954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.438968] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.438982] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.449088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 
00:37:33.611 [2024-11-28 11:10:12.458911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.458978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.459002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.459019] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.611 [2024-11-28 11:10:12.459030] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.611 [2024-11-28 11:10:12.469098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.611 qpair failed and we were unable to recover it. 00:37:33.611 [2024-11-28 11:10:12.478907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.611 [2024-11-28 11:10:12.478967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.611 [2024-11-28 11:10:12.478994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.611 [2024-11-28 11:10:12.479008] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.612 [2024-11-28 11:10:12.479022] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.612 [2024-11-28 11:10:12.489314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.612 qpair failed and we were unable to recover it. 00:37:33.872 [2024-11-28 11:10:12.498931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.872 [2024-11-28 11:10:12.498995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.872 [2024-11-28 11:10:12.499020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.872 [2024-11-28 11:10:12.499038] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.872 [2024-11-28 11:10:12.499049] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.872 [2024-11-28 11:10:12.509145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.872 qpair failed and we were unable to recover it. 
00:37:33.872 [2024-11-28 11:10:12.519034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.872 [2024-11-28 11:10:12.519097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.872 [2024-11-28 11:10:12.519124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.872 [2024-11-28 11:10:12.519137] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.872 [2024-11-28 11:10:12.519151] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.872 [2024-11-28 11:10:12.529520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.872 qpair failed and we were unable to recover it. 00:37:33.872 [2024-11-28 11:10:12.539101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.872 [2024-11-28 11:10:12.539168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.872 [2024-11-28 11:10:12.539192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.872 [2024-11-28 11:10:12.539208] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.872 [2024-11-28 11:10:12.539220] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.872 [2024-11-28 11:10:12.549413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.872 qpair failed and we were unable to recover it. 00:37:33.872 [2024-11-28 11:10:12.559095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.872 [2024-11-28 11:10:12.559154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.872 [2024-11-28 11:10:12.559182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.872 [2024-11-28 11:10:12.559196] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.872 [2024-11-28 11:10:12.559210] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.872 [2024-11-28 11:10:12.569546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.872 qpair failed and we were unable to recover it. 
00:37:33.872 [2024-11-28 11:10:12.579163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.872 [2024-11-28 11:10:12.579236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.872 [2024-11-28 11:10:12.579261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.872 [2024-11-28 11:10:12.579276] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.872 [2024-11-28 11:10:12.579288] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.872 [2024-11-28 11:10:12.589425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.872 qpair failed and we were unable to recover it. 00:37:33.872 [2024-11-28 11:10:12.599226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.872 [2024-11-28 11:10:12.599292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.872 [2024-11-28 11:10:12.599319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.872 [2024-11-28 11:10:12.599333] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.872 [2024-11-28 11:10:12.599348] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.872 [2024-11-28 11:10:12.609662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.872 qpair failed and we were unable to recover it. 00:37:33.872 [2024-11-28 11:10:12.619349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.872 [2024-11-28 11:10:12.619415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.872 [2024-11-28 11:10:12.619439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.872 [2024-11-28 11:10:12.619455] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.872 [2024-11-28 11:10:12.619467] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.872 [2024-11-28 11:10:12.629640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.872 qpair failed and we were unable to recover it. 
00:37:33.872 [2024-11-28 11:10:12.639357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.872 [2024-11-28 11:10:12.639455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.872 [2024-11-28 11:10:12.639484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.872 [2024-11-28 11:10:12.639498] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.872 [2024-11-28 11:10:12.639512] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.872 [2024-11-28 11:10:12.649748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.872 qpair failed and we were unable to recover it. 00:37:33.872 [2024-11-28 11:10:12.659518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.872 [2024-11-28 11:10:12.659583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.872 [2024-11-28 11:10:12.659616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.872 [2024-11-28 11:10:12.659633] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.872 [2024-11-28 11:10:12.659644] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.872 [2024-11-28 11:10:12.669571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.872 qpair failed and we were unable to recover it. 00:37:33.872 [2024-11-28 11:10:12.679490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.872 [2024-11-28 11:10:12.679547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.872 [2024-11-28 11:10:12.679574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.872 [2024-11-28 11:10:12.679588] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.872 [2024-11-28 11:10:12.679604] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.872 [2024-11-28 11:10:12.689746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.873 qpair failed and we were unable to recover it. 
00:37:33.873 [2024-11-28 11:10:12.699561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.873 [2024-11-28 11:10:12.699627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.873 [2024-11-28 11:10:12.699652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.873 [2024-11-28 11:10:12.699668] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.873 [2024-11-28 11:10:12.699679] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.873 [2024-11-28 11:10:12.709905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.873 qpair failed and we were unable to recover it. 00:37:33.873 [2024-11-28 11:10:12.719597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.873 [2024-11-28 11:10:12.719659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.873 [2024-11-28 11:10:12.719686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.873 [2024-11-28 11:10:12.719700] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.873 [2024-11-28 11:10:12.719713] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.873 [2024-11-28 11:10:12.730009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.873 qpair failed and we were unable to recover it. 00:37:33.873 [2024-11-28 11:10:12.739644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:33.873 [2024-11-28 11:10:12.739708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:33.873 [2024-11-28 11:10:12.739732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:33.873 [2024-11-28 11:10:12.739748] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:33.873 [2024-11-28 11:10:12.739763] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:33.873 [2024-11-28 11:10:12.749817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:33.873 qpair failed and we were unable to recover it. 
00:37:34.133 [2024-11-28 11:10:12.759716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.133 [2024-11-28 11:10:12.759777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.133 [2024-11-28 11:10:12.759804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.133 [2024-11-28 11:10:12.759818] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.133 [2024-11-28 11:10:12.759832] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.133 [2024-11-28 11:10:12.769981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.133 qpair failed and we were unable to recover it. 00:37:34.133 [2024-11-28 11:10:12.779905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.133 [2024-11-28 11:10:12.779970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.133 [2024-11-28 11:10:12.779994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.133 [2024-11-28 11:10:12.780010] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.133 [2024-11-28 11:10:12.780022] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.133 [2024-11-28 11:10:12.790178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.133 qpair failed and we were unable to recover it. 00:37:34.133 [2024-11-28 11:10:12.800060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.133 [2024-11-28 11:10:12.800117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.133 [2024-11-28 11:10:12.800144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.133 [2024-11-28 11:10:12.800158] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.133 [2024-11-28 11:10:12.800174] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.133 [2024-11-28 11:10:12.810581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.133 qpair failed and we were unable to recover it. 
00:37:34.133 [2024-11-28 11:10:12.819953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.133 [2024-11-28 11:10:12.820021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.133 [2024-11-28 11:10:12.820046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.133 [2024-11-28 11:10:12.820064] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.133 [2024-11-28 11:10:12.820076] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.133 [2024-11-28 11:10:12.830379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.133 qpair failed and we were unable to recover it. 00:37:34.133 [2024-11-28 11:10:12.840153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.133 [2024-11-28 11:10:12.840219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.134 [2024-11-28 11:10:12.840245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.134 [2024-11-28 11:10:12.840259] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.134 [2024-11-28 11:10:12.840272] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.134 [2024-11-28 11:10:12.850396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.134 qpair failed and we were unable to recover it. 00:37:34.134 [2024-11-28 11:10:12.860171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.134 [2024-11-28 11:10:12.860240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.134 [2024-11-28 11:10:12.860265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.134 [2024-11-28 11:10:12.860281] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.134 [2024-11-28 11:10:12.860292] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.134 [2024-11-28 11:10:12.870709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.134 qpair failed and we were unable to recover it. 
00:37:34.134 [2024-11-28 11:10:12.880232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.134 [2024-11-28 11:10:12.880296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.134 [2024-11-28 11:10:12.880323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.134 [2024-11-28 11:10:12.880337] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.134 [2024-11-28 11:10:12.880350] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.134 [2024-11-28 11:10:12.890382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.134 qpair failed and we were unable to recover it. 00:37:34.134 [2024-11-28 11:10:12.900372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.134 [2024-11-28 11:10:12.900439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.134 [2024-11-28 11:10:12.900464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.134 [2024-11-28 11:10:12.900480] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.134 [2024-11-28 11:10:12.900491] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.134 [2024-11-28 11:10:12.910583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.134 qpair failed and we were unable to recover it. 00:37:34.134 [2024-11-28 11:10:12.920429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.134 [2024-11-28 11:10:12.920497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.134 [2024-11-28 11:10:12.920526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.134 [2024-11-28 11:10:12.920539] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.134 [2024-11-28 11:10:12.920553] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.134 [2024-11-28 11:10:12.930667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.134 qpair failed and we were unable to recover it. 
00:37:34.134 [2024-11-28 11:10:12.940431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.134 [2024-11-28 11:10:12.940502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.134 [2024-11-28 11:10:12.940526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.134 [2024-11-28 11:10:12.940541] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.134 [2024-11-28 11:10:12.940553] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.134 [2024-11-28 11:10:12.950858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.134 qpair failed and we were unable to recover it. 00:37:34.134 [2024-11-28 11:10:12.960496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.134 [2024-11-28 11:10:12.960561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.134 [2024-11-28 11:10:12.960591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.134 [2024-11-28 11:10:12.960604] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.134 [2024-11-28 11:10:12.960624] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.134 [2024-11-28 11:10:12.970827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.134 qpair failed and we were unable to recover it. 00:37:34.134 [2024-11-28 11:10:12.980449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.134 [2024-11-28 11:10:12.980512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.134 [2024-11-28 11:10:12.980536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.134 [2024-11-28 11:10:12.980552] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.134 [2024-11-28 11:10:12.980564] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.134 [2024-11-28 11:10:12.991071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.134 qpair failed and we were unable to recover it. 
00:37:34.134 [2024-11-28 11:10:13.000570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.134 [2024-11-28 11:10:13.000640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.134 [2024-11-28 11:10:13.000669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.134 [2024-11-28 11:10:13.000683] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.134 [2024-11-28 11:10:13.000698] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.134 [2024-11-28 11:10:13.010919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.134 qpair failed and we were unable to recover it. 00:37:34.395 [2024-11-28 11:10:13.020680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.395 [2024-11-28 11:10:13.020739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.395 [2024-11-28 11:10:13.020764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.395 [2024-11-28 11:10:13.020779] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.395 [2024-11-28 11:10:13.020790] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.395 [2024-11-28 11:10:13.031023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.395 qpair failed and we were unable to recover it. 00:37:34.395 [2024-11-28 11:10:13.040708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.395 [2024-11-28 11:10:13.040770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.395 [2024-11-28 11:10:13.040798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.395 [2024-11-28 11:10:13.040812] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.395 [2024-11-28 11:10:13.040825] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.395 [2024-11-28 11:10:13.050931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.395 qpair failed and we were unable to recover it. 
00:37:34.395 [2024-11-28 11:10:13.060733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.395 [2024-11-28 11:10:13.060801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.395 [2024-11-28 11:10:13.060825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.395 [2024-11-28 11:10:13.060841] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.395 [2024-11-28 11:10:13.060852] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.395 [2024-11-28 11:10:13.071032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.395 qpair failed and we were unable to recover it. 00:37:34.395 [2024-11-28 11:10:13.080878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.395 [2024-11-28 11:10:13.080937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.395 [2024-11-28 11:10:13.080965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.395 [2024-11-28 11:10:13.080978] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.395 [2024-11-28 11:10:13.080996] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.395 [2024-11-28 11:10:13.090920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.395 qpair failed and we were unable to recover it. 00:37:34.395 [2024-11-28 11:10:13.100818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.395 [2024-11-28 11:10:13.100891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.395 [2024-11-28 11:10:13.100916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.395 [2024-11-28 11:10:13.100933] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.395 [2024-11-28 11:10:13.100944] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.395 [2024-11-28 11:10:13.111441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.395 qpair failed and we were unable to recover it. 
00:37:34.395 [2024-11-28 11:10:13.121056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.395 [2024-11-28 11:10:13.121119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.395 [2024-11-28 11:10:13.121146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.395 [2024-11-28 11:10:13.121159] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.395 [2024-11-28 11:10:13.121173] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.395 [2024-11-28 11:10:13.131257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.395 qpair failed and we were unable to recover it. 00:37:34.395 [2024-11-28 11:10:13.140902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.395 [2024-11-28 11:10:13.140972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.395 [2024-11-28 11:10:13.140997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.395 [2024-11-28 11:10:13.141016] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.395 [2024-11-28 11:10:13.141031] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.395 [2024-11-28 11:10:13.151543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.395 qpair failed and we were unable to recover it. 00:37:34.395 [2024-11-28 11:10:13.161097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.395 [2024-11-28 11:10:13.161159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.395 [2024-11-28 11:10:13.161186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.396 [2024-11-28 11:10:13.161200] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.396 [2024-11-28 11:10:13.161213] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.396 [2024-11-28 11:10:13.171387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.396 qpair failed and we were unable to recover it. 
00:37:34.396 [2024-11-28 11:10:13.181087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.396 [2024-11-28 11:10:13.181152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.396 [2024-11-28 11:10:13.181177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.396 [2024-11-28 11:10:13.181193] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.396 [2024-11-28 11:10:13.181204] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.396 [2024-11-28 11:10:13.191500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.396 qpair failed and we were unable to recover it. 00:37:34.396 [2024-11-28 11:10:13.202390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.396 [2024-11-28 11:10:13.202460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.396 [2024-11-28 11:10:13.202489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.396 [2024-11-28 11:10:13.202503] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.396 [2024-11-28 11:10:13.202517] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.396 [2024-11-28 11:10:13.211728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.396 qpair failed and we were unable to recover it. 00:37:34.396 [2024-11-28 11:10:13.221085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.396 [2024-11-28 11:10:13.221153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.396 [2024-11-28 11:10:13.221178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.396 [2024-11-28 11:10:13.221194] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.396 [2024-11-28 11:10:13.221206] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.396 [2024-11-28 11:10:13.231548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.396 qpair failed and we were unable to recover it. 
00:37:34.396 [2024-11-28 11:10:13.241504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.396 [2024-11-28 11:10:13.241570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.396 [2024-11-28 11:10:13.241597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.396 [2024-11-28 11:10:13.241639] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.396 [2024-11-28 11:10:13.241654] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.396 [2024-11-28 11:10:13.251921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.396 qpair failed and we were unable to recover it. 00:37:34.396 [2024-11-28 11:10:13.261207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.396 [2024-11-28 11:10:13.261271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.396 [2024-11-28 11:10:13.261296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.396 [2024-11-28 11:10:13.261311] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.396 [2024-11-28 11:10:13.261322] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.396 [2024-11-28 11:10:13.271746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.396 qpair failed and we were unable to recover it. 00:37:34.656 [2024-11-28 11:10:13.281478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.656 [2024-11-28 11:10:13.281536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.656 [2024-11-28 11:10:13.281566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.656 [2024-11-28 11:10:13.281580] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.656 [2024-11-28 11:10:13.281595] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.656 [2024-11-28 11:10:13.291786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.656 qpair failed and we were unable to recover it. 
00:37:34.656 [2024-11-28 11:10:13.301441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.656 [2024-11-28 11:10:13.301504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.656 [2024-11-28 11:10:13.301529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.656 [2024-11-28 11:10:13.301545] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.657 [2024-11-28 11:10:13.301556] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.657 [2024-11-28 11:10:13.311808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.657 qpair failed and we were unable to recover it. 00:37:34.657 [2024-11-28 11:10:13.321536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.657 [2024-11-28 11:10:13.321596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.657 [2024-11-28 11:10:13.321639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.657 [2024-11-28 11:10:13.321653] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.657 [2024-11-28 11:10:13.321670] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.657 [2024-11-28 11:10:13.331770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.657 qpair failed and we were unable to recover it. 00:37:34.657 [2024-11-28 11:10:13.341479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.657 [2024-11-28 11:10:13.341537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.657 [2024-11-28 11:10:13.341561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.657 [2024-11-28 11:10:13.341581] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.657 [2024-11-28 11:10:13.341592] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.657 [2024-11-28 11:10:13.353760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.657 qpair failed and we were unable to recover it. 
00:37:34.657 [2024-11-28 11:10:13.361508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.657 [2024-11-28 11:10:13.361570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.657 [2024-11-28 11:10:13.361598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.657 [2024-11-28 11:10:13.361617] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.657 [2024-11-28 11:10:13.361631] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.657 [2024-11-28 11:10:13.371877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.657 qpair failed and we were unable to recover it. 00:37:34.657 [2024-11-28 11:10:13.381711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.657 [2024-11-28 11:10:13.381780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.657 [2024-11-28 11:10:13.381805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.657 [2024-11-28 11:10:13.381820] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.657 [2024-11-28 11:10:13.381832] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.657 [2024-11-28 11:10:13.392048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.657 qpair failed and we were unable to recover it. 00:37:34.657 [2024-11-28 11:10:13.401779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.657 [2024-11-28 11:10:13.401840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.657 [2024-11-28 11:10:13.401866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.657 [2024-11-28 11:10:13.401880] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.657 [2024-11-28 11:10:13.401894] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.657 [2024-11-28 11:10:13.412145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.657 qpair failed and we were unable to recover it. 
00:37:34.657 [2024-11-28 11:10:13.421802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.657 [2024-11-28 11:10:13.421864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.657 [2024-11-28 11:10:13.421889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.657 [2024-11-28 11:10:13.421905] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.657 [2024-11-28 11:10:13.421917] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.657 [2024-11-28 11:10:13.431934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.657 qpair failed and we were unable to recover it. 00:37:34.657 [2024-11-28 11:10:13.441877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.657 [2024-11-28 11:10:13.441939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.657 [2024-11-28 11:10:13.441966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.657 [2024-11-28 11:10:13.441980] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.657 [2024-11-28 11:10:13.441994] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.657 [2024-11-28 11:10:13.452170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.657 qpair failed and we were unable to recover it. 00:37:34.657 [2024-11-28 11:10:13.461809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.657 [2024-11-28 11:10:13.461874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.657 [2024-11-28 11:10:13.461898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.657 [2024-11-28 11:10:13.461917] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.657 [2024-11-28 11:10:13.461928] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.657 [2024-11-28 11:10:13.472174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.657 qpair failed and we were unable to recover it. 
00:37:34.657 [2024-11-28 11:10:13.482127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.657 [2024-11-28 11:10:13.482186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.657 [2024-11-28 11:10:13.482211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.657 [2024-11-28 11:10:13.482224] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.657 [2024-11-28 11:10:13.482236] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.657 [2024-11-28 11:10:13.492567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.657 qpair failed and we were unable to recover it. 00:37:34.657 [2024-11-28 11:10:13.503633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.657 [2024-11-28 11:10:13.503700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.657 [2024-11-28 11:10:13.503724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.657 [2024-11-28 11:10:13.503738] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.657 [2024-11-28 11:10:13.503749] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.658 [2024-11-28 11:10:13.512489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.658 qpair failed and we were unable to recover it. 00:37:34.658 [2024-11-28 11:10:13.521990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.658 [2024-11-28 11:10:13.522049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.658 [2024-11-28 11:10:13.522074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.658 [2024-11-28 11:10:13.522088] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.658 [2024-11-28 11:10:13.522101] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.658 [2024-11-28 11:10:13.532506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.658 qpair failed and we were unable to recover it. 
00:37:34.918 [2024-11-28 11:10:13.542046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.918 [2024-11-28 11:10:13.542108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.918 [2024-11-28 11:10:13.542132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.918 [2024-11-28 11:10:13.542146] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.918 [2024-11-28 11:10:13.542158] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.918 [2024-11-28 11:10:13.552505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.918 qpair failed and we were unable to recover it. 00:37:34.918 [2024-11-28 11:10:13.562228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.918 [2024-11-28 11:10:13.562291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.918 [2024-11-28 11:10:13.562315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.918 [2024-11-28 11:10:13.562329] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.918 [2024-11-28 11:10:13.562341] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.918 [2024-11-28 11:10:13.572571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.918 qpair failed and we were unable to recover it. 00:37:34.918 [2024-11-28 11:10:13.582177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.918 [2024-11-28 11:10:13.582240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.918 [2024-11-28 11:10:13.582266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.918 [2024-11-28 11:10:13.582279] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.918 [2024-11-28 11:10:13.582291] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.918 [2024-11-28 11:10:13.592515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.918 qpair failed and we were unable to recover it. 
00:37:34.918 [2024-11-28 11:10:13.602234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.918 [2024-11-28 11:10:13.602297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.918 [2024-11-28 11:10:13.602325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.918 [2024-11-28 11:10:13.602339] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.918 [2024-11-28 11:10:13.602351] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.918 [2024-11-28 11:10:13.612461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.918 qpair failed and we were unable to recover it. 00:37:34.919 [2024-11-28 11:10:13.622296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.919 [2024-11-28 11:10:13.622354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.919 [2024-11-28 11:10:13.622378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.919 [2024-11-28 11:10:13.622391] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.919 [2024-11-28 11:10:13.622402] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.919 [2024-11-28 11:10:13.632781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.919 qpair failed and we were unable to recover it. 00:37:34.919 [2024-11-28 11:10:13.642563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.919 [2024-11-28 11:10:13.642621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.919 [2024-11-28 11:10:13.642646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.919 [2024-11-28 11:10:13.642660] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.919 [2024-11-28 11:10:13.642671] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.919 [2024-11-28 11:10:13.657031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.919 qpair failed and we were unable to recover it. 
00:37:34.919 [2024-11-28 11:10:13.662528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.919 [2024-11-28 11:10:13.662587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.919 [2024-11-28 11:10:13.662618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.919 [2024-11-28 11:10:13.662632] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.919 [2024-11-28 11:10:13.662644] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.919 [2024-11-28 11:10:13.672670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.919 qpair failed and we were unable to recover it. 00:37:34.919 [2024-11-28 11:10:13.682581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.919 [2024-11-28 11:10:13.682643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.919 [2024-11-28 11:10:13.682667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.919 [2024-11-28 11:10:13.682685] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.919 [2024-11-28 11:10:13.682696] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.919 [2024-11-28 11:10:13.692827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.919 qpair failed and we were unable to recover it. 00:37:34.919 [2024-11-28 11:10:13.702625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.919 [2024-11-28 11:10:13.702685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.919 [2024-11-28 11:10:13.702710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.919 [2024-11-28 11:10:13.702723] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.919 [2024-11-28 11:10:13.702735] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.919 [2024-11-28 11:10:13.712890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.919 qpair failed and we were unable to recover it. 
00:37:34.919 [2024-11-28 11:10:13.722690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.919 [2024-11-28 11:10:13.722753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.919 [2024-11-28 11:10:13.722777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.919 [2024-11-28 11:10:13.722790] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.919 [2024-11-28 11:10:13.722802] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.919 [2024-11-28 11:10:13.733002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.919 qpair failed and we were unable to recover it. 00:37:34.919 [2024-11-28 11:10:13.742718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.919 [2024-11-28 11:10:13.742773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.919 [2024-11-28 11:10:13.742798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.919 [2024-11-28 11:10:13.742811] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.919 [2024-11-28 11:10:13.742822] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.919 [2024-11-28 11:10:13.753155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.919 qpair failed and we were unable to recover it. 00:37:34.919 [2024-11-28 11:10:13.762814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.919 [2024-11-28 11:10:13.762870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.919 [2024-11-28 11:10:13.762895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.919 [2024-11-28 11:10:13.762908] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.919 [2024-11-28 11:10:13.762920] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.919 [2024-11-28 11:10:13.773045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.919 qpair failed and we were unable to recover it. 
00:37:34.919 [2024-11-28 11:10:13.782767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:34.919 [2024-11-28 11:10:13.782822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:34.919 [2024-11-28 11:10:13.782846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:34.919 [2024-11-28 11:10:13.782860] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:34.919 [2024-11-28 11:10:13.782871] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:34.919 [2024-11-28 11:10:13.793061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:34.919 qpair failed and we were unable to recover it. 00:37:35.180 [2024-11-28 11:10:13.802829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:35.180 [2024-11-28 11:10:13.802884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:35.180 [2024-11-28 11:10:13.802909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:35.180 [2024-11-28 11:10:13.802922] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:35.180 [2024-11-28 11:10:13.802934] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:35.180 [2024-11-28 11:10:13.813281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:35.180 qpair failed and we were unable to recover it. 00:37:35.180 [2024-11-28 11:10:13.822820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:35.180 [2024-11-28 11:10:13.822882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:35.180 [2024-11-28 11:10:13.822906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:35.180 [2024-11-28 11:10:13.822920] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:35.180 [2024-11-28 11:10:13.822931] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:35.180 [2024-11-28 11:10:13.833221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:35.180 qpair failed and we were unable to recover it. 
00:37:35.180 [2024-11-28 11:10:13.843048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:35.180 [2024-11-28 11:10:13.843109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:35.180 [2024-11-28 11:10:13.843133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:35.180 [2024-11-28 11:10:13.843147] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:35.180 [2024-11-28 11:10:13.843158] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:35.180 [2024-11-28 11:10:13.853166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:35.180 qpair failed and we were unable to recover it. 00:37:35.180 [2024-11-28 11:10:13.863042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:35.180 [2024-11-28 11:10:13.863103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:35.180 [2024-11-28 11:10:13.863127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:35.180 [2024-11-28 11:10:13.863141] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:35.180 [2024-11-28 11:10:13.863152] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:35.180 [2024-11-28 11:10:13.873230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:35.180 qpair failed and we were unable to recover it. 00:37:35.180 [2024-11-28 11:10:13.883009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:35.180 [2024-11-28 11:10:13.883061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:35.180 [2024-11-28 11:10:13.883085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:35.180 [2024-11-28 11:10:13.883099] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:35.180 [2024-11-28 11:10:13.883110] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:35.180 [2024-11-28 11:10:13.893656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:35.180 qpair failed and we were unable to recover it. 
[The same seven-message CONNECT failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; Connect command completed with error: sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect rqpair=0x2000003d3140; CQ transport error -6 (No such device or address) on qpair id 4) repeats for every subsequent connection attempt from 2024-11-28 11:10:13.903 through 11:10:15.217, each attempt ending with: qpair failed and we were unable to recover it.]
00:37:36.486 [2024-11-28 11:10:15.227060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.486 [2024-11-28 11:10:15.227121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.486 [2024-11-28 11:10:15.227150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.486 [2024-11-28 11:10:15.227164] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.486 [2024-11-28 11:10:15.227175] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.486 [2024-11-28 11:10:15.237517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.486 qpair failed and we were unable to recover it. 00:37:36.486 [2024-11-28 11:10:15.247113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.486 [2024-11-28 11:10:15.247171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.486 [2024-11-28 11:10:15.247195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.486 [2024-11-28 11:10:15.247209] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.486 [2024-11-28 11:10:15.247220] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.486 [2024-11-28 11:10:15.257413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.486 qpair failed and we were unable to recover it. 00:37:36.486 [2024-11-28 11:10:15.267112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.486 [2024-11-28 11:10:15.267167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.486 [2024-11-28 11:10:15.267191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.486 [2024-11-28 11:10:15.267205] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.486 [2024-11-28 11:10:15.267216] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.486 [2024-11-28 11:10:15.277386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.486 qpair failed and we were unable to recover it. 
00:37:36.486 [2024-11-28 11:10:15.287348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.486 [2024-11-28 11:10:15.287407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.486 [2024-11-28 11:10:15.287432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.486 [2024-11-28 11:10:15.287445] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.486 [2024-11-28 11:10:15.287457] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.486 [2024-11-28 11:10:15.298223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.486 qpair failed and we were unable to recover it. 00:37:36.486 [2024-11-28 11:10:15.307342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.486 [2024-11-28 11:10:15.307398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.486 [2024-11-28 11:10:15.307423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.486 [2024-11-28 11:10:15.307441] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.486 [2024-11-28 11:10:15.307452] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.486 [2024-11-28 11:10:15.317611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.486 qpair failed and we were unable to recover it. 00:37:36.486 [2024-11-28 11:10:15.327417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.486 [2024-11-28 11:10:15.327476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.486 [2024-11-28 11:10:15.327501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.486 [2024-11-28 11:10:15.327514] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.486 [2024-11-28 11:10:15.327526] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.486 [2024-11-28 11:10:15.337669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.486 qpair failed and we were unable to recover it. 
00:37:36.486 [2024-11-28 11:10:15.347419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.486 [2024-11-28 11:10:15.347476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.486 [2024-11-28 11:10:15.347501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.486 [2024-11-28 11:10:15.347515] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.486 [2024-11-28 11:10:15.347525] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.486 [2024-11-28 11:10:15.357699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.486 qpair failed and we were unable to recover it. 00:37:36.746 [2024-11-28 11:10:15.367541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.746 [2024-11-28 11:10:15.367592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.746 [2024-11-28 11:10:15.367624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.746 [2024-11-28 11:10:15.367638] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.746 [2024-11-28 11:10:15.367649] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.746 [2024-11-28 11:10:15.377820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.746 qpair failed and we were unable to recover it. 00:37:36.746 [2024-11-28 11:10:15.387791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.746 [2024-11-28 11:10:15.387846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.746 [2024-11-28 11:10:15.387871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.746 [2024-11-28 11:10:15.387885] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.746 [2024-11-28 11:10:15.387897] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.746 [2024-11-28 11:10:15.397824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.746 qpair failed and we were unable to recover it. 
00:37:36.746 [2024-11-28 11:10:15.407700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.746 [2024-11-28 11:10:15.407765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.746 [2024-11-28 11:10:15.407790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.746 [2024-11-28 11:10:15.407803] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.746 [2024-11-28 11:10:15.407815] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.746 [2024-11-28 11:10:15.418068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.746 qpair failed and we were unable to recover it. 00:37:36.746 [2024-11-28 11:10:15.427715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.746 [2024-11-28 11:10:15.427778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.746 [2024-11-28 11:10:15.427803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.746 [2024-11-28 11:10:15.427817] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.746 [2024-11-28 11:10:15.427828] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.746 [2024-11-28 11:10:15.438034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.746 qpair failed and we were unable to recover it. 00:37:36.746 [2024-11-28 11:10:15.447847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.746 [2024-11-28 11:10:15.447910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.746 [2024-11-28 11:10:15.447936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.746 [2024-11-28 11:10:15.447949] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.746 [2024-11-28 11:10:15.447960] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.746 [2024-11-28 11:10:15.458236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.746 qpair failed and we were unable to recover it. 
00:37:36.746 [2024-11-28 11:10:15.467911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.746 [2024-11-28 11:10:15.467969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.746 [2024-11-28 11:10:15.467993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.746 [2024-11-28 11:10:15.468006] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.746 [2024-11-28 11:10:15.468018] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.746 [2024-11-28 11:10:15.478277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.746 qpair failed and we were unable to recover it. 00:37:36.746 [2024-11-28 11:10:15.488026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.747 [2024-11-28 11:10:15.488087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.747 [2024-11-28 11:10:15.488111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.747 [2024-11-28 11:10:15.488125] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.747 [2024-11-28 11:10:15.488136] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.747 [2024-11-28 11:10:15.498379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.747 qpair failed and we were unable to recover it. 00:37:36.747 [2024-11-28 11:10:15.508111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.747 [2024-11-28 11:10:15.508176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.747 [2024-11-28 11:10:15.508201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.747 [2024-11-28 11:10:15.508215] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.747 [2024-11-28 11:10:15.508226] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.747 [2024-11-28 11:10:15.518419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.747 qpair failed and we were unable to recover it. 
00:37:36.747 [2024-11-28 11:10:15.528160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.747 [2024-11-28 11:10:15.528228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.747 [2024-11-28 11:10:15.528253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.747 [2024-11-28 11:10:15.528267] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.747 [2024-11-28 11:10:15.528278] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.747 [2024-11-28 11:10:15.538387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.747 qpair failed and we were unable to recover it. 00:37:36.747 [2024-11-28 11:10:15.548228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.747 [2024-11-28 11:10:15.548284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.747 [2024-11-28 11:10:15.548309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.747 [2024-11-28 11:10:15.548322] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.747 [2024-11-28 11:10:15.548333] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.747 [2024-11-28 11:10:15.558537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.747 qpair failed and we were unable to recover it. 00:37:36.747 [2024-11-28 11:10:15.568258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.747 [2024-11-28 11:10:15.568317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.747 [2024-11-28 11:10:15.568345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.747 [2024-11-28 11:10:15.568360] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.747 [2024-11-28 11:10:15.568371] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.747 [2024-11-28 11:10:15.578620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.747 qpair failed and we were unable to recover it. 
00:37:36.747 [2024-11-28 11:10:15.588348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.747 [2024-11-28 11:10:15.588401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.747 [2024-11-28 11:10:15.588425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.747 [2024-11-28 11:10:15.588439] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.747 [2024-11-28 11:10:15.588450] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.747 [2024-11-28 11:10:15.598669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.747 qpair failed and we were unable to recover it. 00:37:36.747 [2024-11-28 11:10:15.608393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:36.747 [2024-11-28 11:10:15.608452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:36.747 [2024-11-28 11:10:15.608476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:36.747 [2024-11-28 11:10:15.608490] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:36.747 [2024-11-28 11:10:15.608502] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:36.747 [2024-11-28 11:10:15.618686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.747 qpair failed and we were unable to recover it. 00:37:37.007 [2024-11-28 11:10:15.628490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.007 [2024-11-28 11:10:15.628551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.007 [2024-11-28 11:10:15.628576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.007 [2024-11-28 11:10:15.628590] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.007 [2024-11-28 11:10:15.628601] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.007 [2024-11-28 11:10:15.638652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.007 qpair failed and we were unable to recover it. 
00:37:37.007 [2024-11-28 11:10:15.648472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.007 [2024-11-28 11:10:15.648534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.007 [2024-11-28 11:10:15.648559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.007 [2024-11-28 11:10:15.648577] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.007 [2024-11-28 11:10:15.648588] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.007 [2024-11-28 11:10:15.658830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.007 qpair failed and we were unable to recover it. 00:37:37.007 [2024-11-28 11:10:15.668519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.007 [2024-11-28 11:10:15.668577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.007 [2024-11-28 11:10:15.668602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.007 [2024-11-28 11:10:15.668622] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.007 [2024-11-28 11:10:15.668633] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.007 [2024-11-28 11:10:15.678824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.007 qpair failed and we were unable to recover it. 00:37:37.007 [2024-11-28 11:10:15.688547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.007 [2024-11-28 11:10:15.688603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.007 [2024-11-28 11:10:15.688633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.007 [2024-11-28 11:10:15.688646] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.007 [2024-11-28 11:10:15.688658] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.007 [2024-11-28 11:10:15.699162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.007 qpair failed and we were unable to recover it. 
00:37:37.007 [2024-11-28 11:10:15.708617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.007 [2024-11-28 11:10:15.708676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.007 [2024-11-28 11:10:15.708700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.007 [2024-11-28 11:10:15.708714] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.007 [2024-11-28 11:10:15.708726] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.007 [2024-11-28 11:10:15.718981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.007 qpair failed and we were unable to recover it. 00:37:37.007 [2024-11-28 11:10:15.728636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.007 [2024-11-28 11:10:15.728699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.007 [2024-11-28 11:10:15.728723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.007 [2024-11-28 11:10:15.728736] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.007 [2024-11-28 11:10:15.728748] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.007 [2024-11-28 11:10:15.739002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.007 qpair failed and we were unable to recover it. 00:37:37.007 [2024-11-28 11:10:15.748878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.007 [2024-11-28 11:10:15.748942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.007 [2024-11-28 11:10:15.748967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.008 [2024-11-28 11:10:15.748980] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.008 [2024-11-28 11:10:15.748992] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.008 [2024-11-28 11:10:15.759353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.008 qpair failed and we were unable to recover it. 
00:37:37.008 [2024-11-28 11:10:15.768796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.008 [2024-11-28 11:10:15.768854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.008 [2024-11-28 11:10:15.768879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.008 [2024-11-28 11:10:15.768892] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.008 [2024-11-28 11:10:15.768903] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.008 [2024-11-28 11:10:15.779230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.008 qpair failed and we were unable to recover it. 00:37:37.008 [2024-11-28 11:10:15.789074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.008 [2024-11-28 11:10:15.789136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.008 [2024-11-28 11:10:15.789160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.008 [2024-11-28 11:10:15.789174] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.008 [2024-11-28 11:10:15.789185] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.008 [2024-11-28 11:10:15.799014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.008 qpair failed and we were unable to recover it. 00:37:37.008 [2024-11-28 11:10:15.808926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.008 [2024-11-28 11:10:15.808984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.008 [2024-11-28 11:10:15.809010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.008 [2024-11-28 11:10:15.809024] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.008 [2024-11-28 11:10:15.809036] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.008 [2024-11-28 11:10:15.819997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.008 qpair failed and we were unable to recover it. 
00:37:37.008 [2024-11-28 11:10:15.829099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.008 [2024-11-28 11:10:15.829156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.008 [2024-11-28 11:10:15.829181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.008 [2024-11-28 11:10:15.829194] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.008 [2024-11-28 11:10:15.829206] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.008 [2024-11-28 11:10:15.839281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.008 qpair failed and we were unable to recover it. 00:37:37.008 [2024-11-28 11:10:15.849094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.008 [2024-11-28 11:10:15.849145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.008 [2024-11-28 11:10:15.849170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.008 [2024-11-28 11:10:15.849184] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.008 [2024-11-28 11:10:15.849196] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.008 [2024-11-28 11:10:15.859387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.008 qpair failed and we were unable to recover it. 00:37:37.008 [2024-11-28 11:10:15.869153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.008 [2024-11-28 11:10:15.869210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.008 [2024-11-28 11:10:15.869234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.008 [2024-11-28 11:10:15.869247] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.008 [2024-11-28 11:10:15.869259] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.008 [2024-11-28 11:10:15.879520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.008 qpair failed and we were unable to recover it. 
00:37:37.268 [2024-11-28 11:10:15.889120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.268 [2024-11-28 11:10:15.889179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.268 [2024-11-28 11:10:15.889203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.268 [2024-11-28 11:10:15.889216] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.268 [2024-11-28 11:10:15.889228] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.268 [2024-11-28 11:10:15.899588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.268 qpair failed and we were unable to recover it. 00:37:37.268 [2024-11-28 11:10:15.909303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.268 [2024-11-28 11:10:15.909361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.268 [2024-11-28 11:10:15.909390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.268 [2024-11-28 11:10:15.909403] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.268 [2024-11-28 11:10:15.909415] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.268 [2024-11-28 11:10:15.919628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.268 qpair failed and we were unable to recover it. 00:37:37.268 [2024-11-28 11:10:15.929415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.268 [2024-11-28 11:10:15.929467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.268 [2024-11-28 11:10:15.929493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.269 [2024-11-28 11:10:15.929506] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.269 [2024-11-28 11:10:15.929518] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.269 [2024-11-28 11:10:15.939617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.269 qpair failed and we were unable to recover it. 
00:37:37.269 [2024-11-28 11:10:15.949418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.269 [2024-11-28 11:10:15.949477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.269 [2024-11-28 11:10:15.949501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.269 [2024-11-28 11:10:15.949515] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.269 [2024-11-28 11:10:15.949526] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.269 [2024-11-28 11:10:15.959461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.269 qpair failed and we were unable to recover it. 00:37:37.269 [2024-11-28 11:10:15.969463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.269 [2024-11-28 11:10:15.969517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.269 [2024-11-28 11:10:15.969541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.269 [2024-11-28 11:10:15.969555] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.269 [2024-11-28 11:10:15.969567] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.269 [2024-11-28 11:10:15.979860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.269 qpair failed and we were unable to recover it. 00:37:37.269 [2024-11-28 11:10:15.989460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.269 [2024-11-28 11:10:15.989513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.269 [2024-11-28 11:10:15.989537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.269 [2024-11-28 11:10:15.989552] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.269 [2024-11-28 11:10:15.989567] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.269 [2024-11-28 11:10:15.999748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.269 qpair failed and we were unable to recover it. 
00:37:37.269 [2024-11-28 11:10:16.009544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.269 [2024-11-28 11:10:16.009603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.269 [2024-11-28 11:10:16.009632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.269 [2024-11-28 11:10:16.009646] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.269 [2024-11-28 11:10:16.009658] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.269 [2024-11-28 11:10:16.019887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.269 qpair failed and we were unable to recover it. 00:37:37.269 [2024-11-28 11:10:16.029542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.269 [2024-11-28 11:10:16.029599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.269 [2024-11-28 11:10:16.029629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.269 [2024-11-28 11:10:16.029643] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.269 [2024-11-28 11:10:16.029654] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.269 [2024-11-28 11:10:16.039901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.269 qpair failed and we were unable to recover it. 00:37:37.269 [2024-11-28 11:10:16.049738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.269 [2024-11-28 11:10:16.049797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.269 [2024-11-28 11:10:16.049821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.269 [2024-11-28 11:10:16.049835] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.269 [2024-11-28 11:10:16.049847] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.269 [2024-11-28 11:10:16.060138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.269 qpair failed and we were unable to recover it. 
00:37:37.269 [2024-11-28 11:10:16.069754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.269 [2024-11-28 11:10:16.069814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.269 [2024-11-28 11:10:16.069838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.269 [2024-11-28 11:10:16.069852] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.269 [2024-11-28 11:10:16.069863] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.269 [2024-11-28 11:10:16.079932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.269 qpair failed and we were unable to recover it. 00:37:37.269 [2024-11-28 11:10:16.089765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.269 [2024-11-28 11:10:16.089825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.269 [2024-11-28 11:10:16.089850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.269 [2024-11-28 11:10:16.089863] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.269 [2024-11-28 11:10:16.089874] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.269 [2024-11-28 11:10:16.100464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.269 qpair failed and we were unable to recover it. 00:37:37.269 [2024-11-28 11:10:16.109838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.269 [2024-11-28 11:10:16.109897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.269 [2024-11-28 11:10:16.109921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.269 [2024-11-28 11:10:16.109934] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.269 [2024-11-28 11:10:16.109946] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.269 [2024-11-28 11:10:16.120341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.269 qpair failed and we were unable to recover it. 
00:37:37.269 [2024-11-28 11:10:16.129941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.269 [2024-11-28 11:10:16.129996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.269 [2024-11-28 11:10:16.130020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.269 [2024-11-28 11:10:16.130033] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.269 [2024-11-28 11:10:16.130045] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.269 [2024-11-28 11:10:16.140359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.269 qpair failed and we were unable to recover it. 00:37:37.530 [2024-11-28 11:10:16.149985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.530 [2024-11-28 11:10:16.150043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.530 [2024-11-28 11:10:16.150067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.530 [2024-11-28 11:10:16.150081] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.530 [2024-11-28 11:10:16.150093] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.530 [2024-11-28 11:10:16.160546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.530 qpair failed and we were unable to recover it. 00:37:37.530 [2024-11-28 11:10:16.170070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.530 [2024-11-28 11:10:16.170138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.530 [2024-11-28 11:10:16.170162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.530 [2024-11-28 11:10:16.170176] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.530 [2024-11-28 11:10:16.170187] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.530 [2024-11-28 11:10:16.180462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.530 qpair failed and we were unable to recover it. 
00:37:37.530 [2024-11-28 11:10:16.190118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.530 [2024-11-28 11:10:16.190179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.530 [2024-11-28 11:10:16.190203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.530 [2024-11-28 11:10:16.190216] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.530 [2024-11-28 11:10:16.190228] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.530 [2024-11-28 11:10:16.200257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.530 qpair failed and we were unable to recover it. 00:37:37.530 [2024-11-28 11:10:16.210232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.530 [2024-11-28 11:10:16.210293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.530 [2024-11-28 11:10:16.210319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.530 [2024-11-28 11:10:16.210333] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.530 [2024-11-28 11:10:16.210344] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.530 [2024-11-28 11:10:16.220519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.530 qpair failed and we were unable to recover it. 00:37:37.530 [2024-11-28 11:10:16.230216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.530 [2024-11-28 11:10:16.230279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.530 [2024-11-28 11:10:16.230304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.530 [2024-11-28 11:10:16.230318] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.530 [2024-11-28 11:10:16.230331] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:37.530 [2024-11-28 11:10:16.240482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:37.530 qpair failed and we were unable to recover it. 
00:37:37.530 [2024-11-28 11:10:16.250510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.530 [2024-11-28 11:10:16.250581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.530 [2024-11-28 11:10:16.250633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.530 [2024-11-28 11:10:16.250656] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.531 [2024-11-28 11:10:16.250676] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:37:37.531 [2024-11-28 11:10:16.260764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:37.531 qpair failed and we were unable to recover it. 00:37:37.531 [2024-11-28 11:10:16.270403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.531 [2024-11-28 11:10:16.270474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.531 [2024-11-28 11:10:16.270502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.531 [2024-11-28 11:10:16.270520] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.531 [2024-11-28 11:10:16.270535] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:37:37.531 [2024-11-28 11:10:16.280376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:37.531 qpair failed and we were unable to recover it. 00:37:37.531 [2024-11-28 11:10:16.290565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.531 [2024-11-28 11:10:16.290644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.531 [2024-11-28 11:10:16.290684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.531 [2024-11-28 11:10:16.290706] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.531 [2024-11-28 11:10:16.290725] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:37:37.531 [2024-11-28 11:10:16.300896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:37.531 qpair failed and we were unable to recover it. 
00:37:37.531 [2024-11-28 11:10:16.310415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:37.531 [2024-11-28 11:10:16.310480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:37.531 [2024-11-28 11:10:16.310506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:37.531 [2024-11-28 11:10:16.310524] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:37.531 [2024-11-28 11:10:16.310537] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:37:37.531 [2024-11-28 11:10:16.320947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:37.531 qpair failed and we were unable to recover it. 00:37:37.531 [2024-11-28 11:10:16.321261] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:37:37.531 A controller has encountered a failure and is being reset. 00:37:37.531 [2024-11-28 11:10:16.321436] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:37:37.531 [2024-11-28 11:10:16.366614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:37:37.531 Controller properly reset. 00:37:37.790 Initializing NVMe Controllers 00:37:37.790 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:37.790 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:37.790 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:37.790 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:37.790 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:37.790 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:37.790 Initialization complete. Launching workers. 
00:37:37.790 Starting thread on core 1 00:37:37.790 Starting thread on core 2 00:37:37.790 Starting thread on core 3 00:37:37.790 Starting thread on core 0 00:37:37.790 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:37.790 00:37:37.790 real 0m12.096s 00:37:37.790 user 0m26.625s 00:37:37.790 sys 0m2.699s 00:37:37.790 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:37.790 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:37.790 ************************************ 00:37:37.790 END TEST nvmf_target_disconnect_tc2 00:37:37.790 ************************************ 00:37:37.790 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:37:37.790 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:37:37.790 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:37.791 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:37.791 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:38.050 ************************************ 00:37:38.050 START TEST nvmf_target_disconnect_tc3 00:37:38.050 ************************************ 00:37:38.050 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:37:38.050 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=1937514 00:37:38.050 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:37:38.051 11:10:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:37:39.962 11:10:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 1936290 00:37:39.962 11:10:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read 
completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Read completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 Write completed with error (sct=0, sc=8) 00:37:41.340 starting I/O failed 00:37:41.340 [2024-11-28 11:10:20.009426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:37:41.908 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 1936290 Killed "${NVMF_APP[@]}" "$@" 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1938177 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1938177 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@835 -- # '[' -z 1938177 ']' 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:41.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:41.908 11:10:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:42.168 [2024-11-28 11:10:20.801549] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:37:42.168 [2024-11-28 11:10:20.801660] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:42.168 [2024-11-28 11:10:20.981799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 
00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Write completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 Read completed with error (sct=0, sc=8) 00:37:42.168 starting I/O failed 00:37:42.168 [2024-11-28 11:10:21.014904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:37:42.428 [2024-11-28 11:10:21.087484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:42.428 [2024-11-28 11:10:21.087532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:42.428 [2024-11-28 11:10:21.087545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:42.428 [2024-11-28 11:10:21.087558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:42.428 [2024-11-28 11:10:21.087568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:42.428 [2024-11-28 11:10:21.090274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:42.428 [2024-11-28 11:10:21.090368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:42.428 [2024-11-28 11:10:21.090433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:42.428 [2024-11-28 11:10:21.090458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:42.996 Malloc0 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.996 11:10:21 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.996 11:10:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:42.996 [2024-11-28 11:10:21.759275] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029740/0x7f744110d940) succeed. 00:37:42.996 [2024-11-28 11:10:21.769000] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000298c0/0x7f7440f48940) succeed. 00:37:43.256 Read completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Read completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Read completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Read completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Read completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Read completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Read completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Read completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Read completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Read completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Write completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.256 Read completed with error (sct=0, sc=8) 00:37:43.256 starting I/O failed 00:37:43.257 Read completed with error (sct=0, sc=8) 00:37:43.257 starting I/O failed 00:37:43.257 Write completed with error (sct=0, sc=8) 00:37:43.257 starting I/O failed 00:37:43.257 Read completed with error (sct=0, sc=8) 00:37:43.257 starting I/O failed 00:37:43.257 Read completed with error (sct=0, sc=8) 00:37:43.257 starting I/O failed 00:37:43.257 Read completed with error (sct=0, sc=8) 00:37:43.257 starting I/O failed 00:37:43.257 Read completed with error (sct=0, sc=8) 00:37:43.257 starting I/O failed 00:37:43.257 Write completed with error (sct=0, sc=8) 00:37:43.257 starting I/O failed 00:37:43.257 Write completed with error (sct=0, sc=8) 00:37:43.257 starting I/O failed 00:37:43.257 Write completed with error (sct=0, sc=8) 00:37:43.257 starting I/O failed 00:37:43.257 [2024-11-28 11:10:22.020554] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:37:43.257 [2024-11-28 11:10:22.022561] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:43.257 [2024-11-28 11:10:22.022592] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:43.257 [2024-11-28 11:10:22.022604] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:43.257 [2024-11-28 11:10:22.054400] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.257 11:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
host/target_disconnect.sh@65 -- # wait 1937514 00:37:44.193 [2024-11-28 11:10:23.026850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.193 qpair failed and we were unable to recover it. 00:37:44.193 [2024-11-28 11:10:23.028646] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:44.193 [2024-11-28 11:10:23.028678] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:44.193 [2024-11-28 11:10:23.028691] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:37:45.573 [2024-11-28 11:10:24.036794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:37:45.573 qpair failed and we were unable to recover it. 00:37:45.573 [2024-11-28 11:10:24.038789] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:45.573 [2024-11-28 11:10:24.038822] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:45.573 [2024-11-28 11:10:24.038835] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:37:46.510 [2024-11-28 11:10:25.043121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-28 11:10:25.044918] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:46.510 [2024-11-28 11:10:25.044953] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:46.510 [2024-11-28 11:10:25.044966] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:37:47.447 [2024-11-28 11:10:26.048942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:37:47.447 qpair failed and we were unable to recover it. 00:37:47.447 [2024-11-28 11:10:26.050969] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:47.447 [2024-11-28 11:10:26.050999] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:47.447 [2024-11-28 11:10:26.051012] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:37:48.385 [2024-11-28 11:10:27.055050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:37:48.385 qpair failed and we were unable to recover it. 
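The failover listener that tc3 is exercising here is the one configured by the rpc_cmd calls traced above. A minimal sketch of the same setup driven by hand through scripts/rpc.py, assuming the target is listening on the default /var/tmp/spdk.sock RPC socket and reusing the Malloc0 bdev name from the test:

  # create the backing bdev and the RDMA transport
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  # expose Malloc0 through cnode1 and add the failover listener on 192.168.100.9
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420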
00:37:48.385 [2024-11-28 11:10:27.056984] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:48.385 [2024-11-28 11:10:27.057014] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:48.385 [2024-11-28 11:10:27.057028] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:37:49.322 [2024-11-28 11:10:28.061383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:37:49.322 qpair failed and we were unable to recover it. 00:37:49.322 [2024-11-28 11:10:28.063730] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:49.322 [2024-11-28 11:10:28.063778] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:49.322 [2024-11-28 11:10:28.063796] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:37:50.261 [2024-11-28 11:10:29.067900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:37:50.261 qpair failed and we were unable to recover it. 00:37:50.261 [2024-11-28 11:10:29.069881] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:50.261 [2024-11-28 11:10:29.069909] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:50.261 [2024-11-28 11:10:29.069922] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:37:51.199 [2024-11-28 11:10:30.074168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:37:51.199 qpair failed and we were unable to recover it. 00:37:51.199 [2024-11-28 11:10:30.074481] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:37:51.199 A controller has encountered a failure and is being reset. 00:37:51.199 Resorting to new failover address 192.168.100.9 00:37:51.199 [2024-11-28 11:10:30.076605] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:51.199 [2024-11-28 11:10:30.076643] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:51.199 [2024-11-28 11:10:30.076657] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:37:52.578 [2024-11-28 11:10:31.080972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:37:52.578 qpair failed and we were unable to recover it. 
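While the host keeps receiving RDMA_CM_EVENT_REJECTED and resorts to the alternate address, the target side can be inspected to confirm that the 192.168.100.9:4420 listener added above is registered. A small sketch, again assuming the default RPC socket; nvmf_subsystem_get_listeners is used on the assumption that it is available in this SPDK build:

  # list subsystems and the listeners registered for cnode1
  ./scripts/rpc.py nvmf_get_subsystems
  ./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1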
00:37:52.578 [2024-11-28 11:10:31.082909] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:52.578 [2024-11-28 11:10:31.082939] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:52.578 [2024-11-28 11:10:31.082953] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:37:53.517 [2024-11-28 11:10:32.087091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:37:53.517 qpair failed and we were unable to recover it. 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Write completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 Read completed with error (sct=0, sc=8) 00:37:54.455 starting I/O failed 00:37:54.455 [2024-11-28 11:10:33.092961] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:37:54.455 [2024-11-28 11:10:33.094731] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:54.455 [2024-11-28 11:10:33.094761] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:54.455 [2024-11-28 11:10:33.094774] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:55.393 [2024-11-28 11:10:34.099133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:37:55.393 qpair failed and we were unable to recover it. 00:37:55.393 [2024-11-28 11:10:34.101200] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:55.393 [2024-11-28 11:10:34.101229] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:55.393 [2024-11-28 11:10:34.101242] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:37:56.330 [2024-11-28 11:10:35.105301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:37:56.330 qpair failed and we were unable to recover it. 00:37:56.330 [2024-11-28 11:10:35.105581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:56.330 [2024-11-28 11:10:35.105765] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:37:56.330 [2024-11-28 11:10:35.151144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:56.330 Controller properly reset. 00:37:56.589 Initializing NVMe Controllers 00:37:56.589 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:56.589 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:56.589 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:56.589 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:56.589 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:56.589 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:56.589 Initialization complete. Launching workers. 
00:37:56.589 Starting thread on core 1 00:37:56.589 Starting thread on core 2 00:37:56.589 Starting thread on core 3 00:37:56.589 Starting thread on core 0 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:37:56.589 00:37:56.589 real 0m18.688s 00:37:56.589 user 1m0.507s 00:37:56.589 sys 0m4.852s 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:56.589 ************************************ 00:37:56.589 END TEST nvmf_target_disconnect_tc3 00:37:56.589 ************************************ 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:37:56.589 rmmod nvme_rdma 00:37:56.589 rmmod nvme_fabrics 00:37:56.589 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1938177 ']' 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1938177 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1938177 ']' 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1938177 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1938177 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1938177' 00:37:56.847 killing process with pid 1938177 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 1938177 00:37:56.847 11:10:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1938177 00:37:58.749 11:10:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:58.749 11:10:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:37:58.749 00:37:58.749 real 0m42.698s 00:37:58.749 user 2m33.251s 00:37:58.749 sys 0m15.005s 00:37:58.749 11:10:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:58.749 11:10:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:58.749 ************************************ 00:37:58.749 END TEST nvmf_target_disconnect 00:37:58.749 ************************************ 00:37:58.749 11:10:37 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:58.749 00:37:58.749 real 8m22.445s 00:37:58.749 user 23m1.436s 00:37:58.749 sys 2m6.760s 00:37:58.749 11:10:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:58.749 11:10:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.749 ************************************ 00:37:58.749 END TEST nvmf_host 00:37:58.749 ************************************ 00:37:58.749 11:10:37 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:37:58.749 00:37:58.749 real 30m52.150s 00:37:58.749 user 87m8.429s 00:37:58.749 sys 7m50.742s 00:37:58.749 11:10:37 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:58.749 11:10:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:58.749 ************************************ 00:37:58.749 END TEST nvmf_rdma 00:37:58.749 ************************************ 00:37:58.749 11:10:37 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:37:58.749 11:10:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:58.749 11:10:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:58.749 11:10:37 -- common/autotest_common.sh@10 -- # set +x 00:37:58.749 ************************************ 00:37:58.749 START TEST spdkcli_nvmf_rdma 00:37:58.749 ************************************ 00:37:58.749 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:37:59.009 * Looking for test storage... 
00:37:59.009 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:59.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.009 --rc genhtml_branch_coverage=1 00:37:59.009 --rc genhtml_function_coverage=1 00:37:59.009 --rc genhtml_legend=1 00:37:59.009 --rc geninfo_all_blocks=1 00:37:59.009 --rc geninfo_unexecuted_blocks=1 00:37:59.009 00:37:59.009 ' 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:59.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:37:59.009 --rc genhtml_branch_coverage=1 00:37:59.009 --rc genhtml_function_coverage=1 00:37:59.009 --rc genhtml_legend=1 00:37:59.009 --rc geninfo_all_blocks=1 00:37:59.009 --rc geninfo_unexecuted_blocks=1 00:37:59.009 00:37:59.009 ' 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:59.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.009 --rc genhtml_branch_coverage=1 00:37:59.009 --rc genhtml_function_coverage=1 00:37:59.009 --rc genhtml_legend=1 00:37:59.009 --rc geninfo_all_blocks=1 00:37:59.009 --rc geninfo_unexecuted_blocks=1 00:37:59.009 00:37:59.009 ' 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:59.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.009 --rc genhtml_branch_coverage=1 00:37:59.009 --rc genhtml_function_coverage=1 00:37:59.009 --rc genhtml_legend=1 00:37:59.009 --rc geninfo_all_blocks=1 00:37:59.009 --rc geninfo_unexecuted_blocks=1 00:37:59.009 00:37:59.009 ' 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:59.009 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:59.010 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1940999 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 1940999 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 1940999 ']' 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:59.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:59.010 11:10:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:59.010 [2024-11-28 11:10:37.874421] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:37:59.010 [2024-11-28 11:10:37.874517] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1940999 ] 00:37:59.269 [2024-11-28 11:10:38.027750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:59.269 [2024-11-28 11:10:38.123503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:59.269 [2024-11-28 11:10:38.123511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:37:59.838 11:10:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
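The device scan above walks PCI vendor/device IDs cached by the harness (Intel 0x8086, Mellanox 0x15b3); outside of nvmf/common.sh the same ConnectX ports found below can be listed directly from lspci and sysfs. A small sketch, not part of the test scripts:

# List Mellanox (vendor 0x15b3) devices with full domain:bus:dev.fn addresses
# and show the netdev each port exposes, reading the same sysfs path the
# harness uses (/sys/bus/pci/devices/<pci>/net/).
lspci -D -d 15b3: | while read -r bdf _; do
    echo "$bdf -> $(ls /sys/bus/pci/devices/"$bdf"/net 2>/dev/null)"
done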
00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:38:09.819 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:38:09.819 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:38:09.819 Found net devices under 0000:d9:00.0: mlx_0_0 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:38:09.819 Found net devices under 0000:d9:00.1: mlx_0_1 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@444 
-- # [[ yes == yes ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:38:09.819 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:09.820 
11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:38:09.820 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:09.820 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:38:09.820 altname enp217s0f0np0 00:38:09.820 altname ens818f0np0 00:38:09.820 inet 192.168.100.8/24 scope global mlx_0_0 00:38:09.820 valid_lft forever preferred_lft forever 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:38:09.820 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:09.820 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:38:09.820 altname enp217s0f1np1 00:38:09.820 altname ens818f1np1 00:38:09.820 inet 192.168.100.9/24 scope global mlx_0_1 00:38:09.820 valid_lft forever preferred_lft forever 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:38:09.820 192.168.100.9' 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:38:09.820 192.168.100.9' 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:38:09.820 192.168.100.9' 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:38:09.820 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:38:09.821 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:38:09.821 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:38:09.821 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:38:09.821 11:10:47 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:38:09.821 11:10:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:38:09.821 11:10:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:09.821 11:10:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:09.821 11:10:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:09.821 11:10:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:09.821 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:09.821 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:09.821 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:09.821 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:09.821 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:38:09.821 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:09.821 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:38:09.821 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:38:09.821 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:09.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:09.821 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:09.821 ' 00:38:11.199 [2024-11-28 11:10:49.983956] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002af40/0x7f90d0ea6940) succeed. 00:38:11.199 [2024-11-28 11:10:49.994284] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002b0c0/0x7f90d0e62940) succeed. 
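The configuration batch above is fed through test/spdkcli/spdkcli_job.py in one shot; the same objects can be created one command at a time with scripts/spdkcli.py, in the same argument style as the ll /nvmf check further down. A sketch of the first few steps only, with values copied from the batch; not a substitute for the job script:

# Malloc bdev (size 32, block size 512), RDMA transport, one subsystem,
# one namespace and one RDMA listener - mirroring the batch above.
./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
./scripts/spdkcli.py nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4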
00:38:12.579 [2024-11-28 11:10:51.347662] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:38:15.114 [2024-11-28 11:10:53.594818] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:38:17.132 [2024-11-28 11:10:55.525233] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:38:18.512 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:18.512 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:18.512 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:18.512 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:18.512 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:18.512 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:18.512 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:18.512 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:38:18.512 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:38:18.512 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:38:18.512 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:18.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:18.512 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:18.512 11:10:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:18.512 11:10:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:18.512 11:10:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:18.512 11:10:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:18.512 11:10:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:18.512 11:10:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:18.512 11:10:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:38:18.512 11:10:57 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:38:18.772 11:10:57 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:18.772 11:10:57 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:18.772 11:10:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:18.772 11:10:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:18.772 11:10:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:18.772 11:10:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:18.772 11:10:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:18.772 11:10:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:18.772 11:10:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:18.772 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:18.772 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:18.772 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:18.772 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:38:18.772 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:38:18.772 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:18.772 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:18.772 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:18.772 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:18.772 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:18.772 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:18.772 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:18.772 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:18.772 ' 00:38:25.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:25.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:25.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:25.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:25.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:38:25.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:38:25.349 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:25.349 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:25.349 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:25.349 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:25.349 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:38:25.349 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:25.349 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:25.349 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:25.349 11:11:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:25.349 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:25.349 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:25.349 11:11:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 1940999 00:38:25.349 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 1940999 ']' 00:38:25.350 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 1940999 00:38:25.350 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:38:25.350 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:25.350 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1940999 00:38:25.350 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:25.350 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:25.350 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1940999' 00:38:25.350 killing process with pid 1940999 00:38:25.350 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 1940999 00:38:25.350 11:11:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 1940999 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
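killprocess above is the stock teardown helper; stripped of its bookkeeping it reduces to the liveness check and SIGTERM visible in the trace. A standalone sketch using the pid from this run, where the polling loop replaces the helper's wait:

# Stop the nvmf_tgt started earlier: confirm the pid is still alive,
# report which command owns it, send SIGTERM, and wait for it to exit.
pid=1940999
if kill -0 "$pid" 2>/dev/null; then
    echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
    kill "$pid"
    while kill -0 "$pid" 2>/dev/null; do sleep 0.2; done
fi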
00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:38:25.919 rmmod nvme_rdma 00:38:25.919 rmmod nvme_fabrics 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:38:25.919 00:38:25.919 real 0m27.240s 00:38:25.919 user 0m57.241s 00:38:25.919 sys 0m7.673s 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:25.919 11:11:04 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:25.919 ************************************ 00:38:25.919 END TEST spdkcli_nvmf_rdma 00:38:25.919 ************************************ 00:38:26.178 11:11:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:26.178 11:11:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:26.178 11:11:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:26.178 11:11:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:26.178 11:11:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:26.178 11:11:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:26.178 11:11:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:26.178 11:11:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:26.178 11:11:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:26.178 11:11:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:26.178 11:11:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:26.178 11:11:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:26.178 11:11:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:26.178 11:11:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:26.178 11:11:04 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:26.178 11:11:04 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:26.178 11:11:04 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:26.178 11:11:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:26.178 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:38:26.178 11:11:04 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:26.178 11:11:04 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:26.178 11:11:04 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:26.178 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:38:32.743 INFO: APP EXITING 00:38:32.743 INFO: killing all VMs 00:38:32.743 INFO: killing vhost app 00:38:32.743 INFO: EXIT DONE 00:38:36.041 Waiting for block devices as requested 00:38:36.041 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:36.041 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:36.041 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:36.041 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:36.041 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:36.041 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:36.041 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:36.301 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:38:36.301 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:36.301 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:36.559 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:36.559 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:36.559 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:36.817 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:36.817 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:36.817 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:37.076 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:38:41.273 Cleaning 00:38:41.273 Removing: /var/run/dpdk/spdk0/config 00:38:41.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:41.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:41.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:41.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:41.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:41.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:41.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:41.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:41.273 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:41.273 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:41.273 Removing: /var/run/dpdk/spdk1/config 00:38:41.273 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:41.273 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:41.273 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:41.274 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:41.274 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:41.274 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:41.274 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:41.274 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:41.274 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:41.274 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:41.274 Removing: /var/run/dpdk/spdk1/mp_socket 00:38:41.274 Removing: /var/run/dpdk/spdk2/config 00:38:41.274 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:41.274 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:41.274 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:41.274 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:41.274 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:41.274 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:41.274 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:41.274 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:41.274 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:41.274 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:41.274 Removing: /var/run/dpdk/spdk3/config 00:38:41.274 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:41.274 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:41.274 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:41.274 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:41.274 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:41.274 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:41.274 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:41.274 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:41.274 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:41.274 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:41.274 Removing: /var/run/dpdk/spdk4/config 00:38:41.274 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:41.274 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:41.274 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:41.274 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:41.274 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:41.274 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:41.274 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:41.274 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:41.274 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:41.274 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:41.274 Removing: /dev/shm/bdevperf_trace.pid1526674 00:38:41.274 Removing: /dev/shm/bdev_svc_trace.1 00:38:41.274 Removing: /dev/shm/nvmf_trace.0 00:38:41.274 Removing: /dev/shm/spdk_tgt_trace.pid1466595 00:38:41.274 Removing: /var/run/dpdk/spdk0 00:38:41.274 Removing: /var/run/dpdk/spdk1 00:38:41.274 Removing: /var/run/dpdk/spdk2 00:38:41.274 Removing: /var/run/dpdk/spdk3 00:38:41.274 Removing: /var/run/dpdk/spdk4 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1462266 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1464043 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1466595 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1467588 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1468945 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1469509 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1470906 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1471174 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1471840 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1477949 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1479675 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1480288 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1481132 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1481754 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1482619 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1482911 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1483198 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1483525 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1484547 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1488076 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1488659 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1489472 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1489508 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1491932 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1492088 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1493837 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1494090 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1494672 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1494855 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1495416 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1495532 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1497110 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1497424 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1497847 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1503038 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1508310 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1519761 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1520593 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1526674 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1526959 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1532511 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1540218 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1543233 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1555793 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1585999 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1591063 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1695766 
00:38:41.274 Removing: /var/run/dpdk/spdk_pid1701865 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1708586 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1720760 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1753094 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1758576 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1805094 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1806938 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1808756 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1810516 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1816346 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1824153 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1833316 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1834405 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1835600 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1837085 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1837615 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1843313 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1843398 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1848948 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1849485 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1850019 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1850822 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1850960 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1853384 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1855323 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1857191 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1859043 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1860901 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1862764 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1869901 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1870544 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1872998 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1874952 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1883359 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1886291 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1892979 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1904487 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1904520 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1927760 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1928251 00:38:41.274 Removing: /var/run/dpdk/spdk_pid1935158 00:38:41.533 Removing: /var/run/dpdk/spdk_pid1935730 00:38:41.533 Removing: /var/run/dpdk/spdk_pid1937514 00:38:41.533 Removing: /var/run/dpdk/spdk_pid1940999 00:38:41.533 Clean 00:38:41.533 11:11:20 -- common/autotest_common.sh@1453 -- # return 0 00:38:41.533 11:11:20 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:38:41.533 11:11:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:41.533 11:11:20 -- common/autotest_common.sh@10 -- # set +x 00:38:41.533 11:11:20 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:38:41.533 11:11:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:41.533 11:11:20 -- common/autotest_common.sh@10 -- # set +x 00:38:41.533 11:11:20 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:38:41.533 11:11:20 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:38:41.533 11:11:20 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:38:41.533 11:11:20 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:38:41.533 11:11:20 -- spdk/autotest.sh@398 -- # hostname 00:38:41.533 11:11:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:38:41.791 geninfo: WARNING: invalid characters removed from testname! 00:39:03.733 11:11:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:05.113 11:11:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:06.489 11:11:45 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:08.395 11:11:46 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:10.301 11:11:48 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:11.679 11:11:50 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:13.586 11:11:52 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:13.586 11:11:52 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:13.586 11:11:52 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:39:13.586 11:11:52 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:13.586 11:11:52 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:13.586 11:11:52 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build 
Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:39:13.586 + [[ -n 1381204 ]] 00:39:13.586 + sudo kill 1381204 00:39:13.597 [Pipeline] } 00:39:13.611 [Pipeline] // stage 00:39:13.617 [Pipeline] } 00:39:13.629 [Pipeline] // timeout 00:39:13.636 [Pipeline] } 00:39:13.648 [Pipeline] // catchError 00:39:13.653 [Pipeline] } 00:39:13.664 [Pipeline] // wrap 00:39:13.670 [Pipeline] } 00:39:13.681 [Pipeline] // catchError 00:39:13.689 [Pipeline] stage 00:39:13.691 [Pipeline] { (Epilogue) 00:39:13.704 [Pipeline] catchError 00:39:13.706 [Pipeline] { 00:39:13.719 [Pipeline] echo 00:39:13.721 Cleanup processes 00:39:13.727 [Pipeline] sh 00:39:14.016 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:39:14.016 1962716 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:39:14.030 [Pipeline] sh 00:39:14.316 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:39:14.316 ++ grep -v 'sudo pgrep' 00:39:14.316 ++ awk '{print $1}' 00:39:14.316 + sudo kill -9 00:39:14.316 + true 00:39:14.329 [Pipeline] sh 00:39:14.616 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:14.616 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:39:21.188 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:39:24.491 [Pipeline] sh 00:39:24.776 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:24.776 Artifacts sizes are good 00:39:24.792 [Pipeline] archiveArtifacts 00:39:24.799 Archiving artifacts 00:39:24.944 [Pipeline] sh 00:39:25.301 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:39:25.316 [Pipeline] cleanWs 00:39:25.326 [WS-CLEANUP] Deleting project workspace... 00:39:25.326 [WS-CLEANUP] Deferred wipeout is used... 00:39:25.332 [WS-CLEANUP] done 00:39:25.334 [Pipeline] } 00:39:25.351 [Pipeline] // catchError 00:39:25.363 [Pipeline] sh 00:39:25.646 + logger -p user.info -t JENKINS-CI 00:39:25.655 [Pipeline] } 00:39:25.669 [Pipeline] // stage 00:39:25.675 [Pipeline] } 00:39:25.691 [Pipeline] // node 00:39:25.697 [Pipeline] End of Pipeline 00:39:25.735 Finished: SUCCESS
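For reference, the coverage post-processing traced before the epilogue reduces to a short lcov pipeline: merge the pre- and post-test captures, then strip DPDK, system and example sources from the combined tracefile. Paths are shortened and the branch/function --rc switches are omitted here, so treat this as a sketch of the flags used above rather than a replacement for autotest.sh:

# OUT corresponds to the jenkins ../output directory in the real run.
OUT=./output
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '/usr/*' --ignore-errors unused,unused -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"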